00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1061 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3728 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.205 > git --version # 'git version 2.39.2' 00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.616 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.626 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.637 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.637 > git config core.sparsecheckout # timeout=10 00:00:06.647 > git read-tree -mu HEAD # timeout=10 00:00:06.661 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.681 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.681 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.773 [Pipeline] Start of Pipeline 00:00:06.788 [Pipeline] library 00:00:06.790 Loading library shm_lib@master 00:00:06.790 Library shm_lib@master is cached. Copying from home. 00:00:06.808 [Pipeline] node 00:00:06.818 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.820 [Pipeline] { 00:00:06.831 [Pipeline] catchError 00:00:06.833 [Pipeline] { 00:00:06.846 [Pipeline] wrap 00:00:06.854 [Pipeline] { 00:00:06.862 [Pipeline] stage 00:00:06.864 [Pipeline] { (Prologue) 00:00:06.885 [Pipeline] echo 00:00:06.886 Node: VM-host-SM0 00:00:06.893 [Pipeline] cleanWs 00:00:06.903 [WS-CLEANUP] Deleting project workspace... 00:00:06.903 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.909 [WS-CLEANUP] done 00:00:07.133 [Pipeline] setCustomBuildProperty 00:00:07.208 [Pipeline] httpRequest 00:00:07.981 [Pipeline] echo 00:00:07.983 Sorcerer 10.211.164.20 is alive 00:00:07.993 [Pipeline] retry 00:00:07.995 [Pipeline] { 00:00:08.009 [Pipeline] httpRequest 00:00:08.013 HttpMethod: GET 00:00:08.014 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.014 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.028 Response Code: HTTP/1.1 200 OK 00:00:08.029 Success: Status code 200 is in the accepted range: 200,404 00:00:08.029 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.549 [Pipeline] } 00:00:11.566 [Pipeline] // retry 00:00:11.574 [Pipeline] sh 00:00:11.855 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.870 [Pipeline] httpRequest 00:00:12.267 [Pipeline] echo 00:00:12.269 Sorcerer 10.211.164.20 is alive 00:00:12.278 [Pipeline] retry 00:00:12.280 [Pipeline] { 00:00:12.295 [Pipeline] httpRequest 00:00:12.300 HttpMethod: GET 00:00:12.300 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.301 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.320 Response Code: HTTP/1.1 200 OK 00:00:12.321 Success: Status code 200 is in the accepted range: 200,404 00:00:12.321 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:59.464 [Pipeline] } 00:00:59.482 [Pipeline] // retry 00:00:59.490 [Pipeline] sh 00:00:59.772 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:02.318 [Pipeline] sh 00:01:02.600 + git -C spdk log --oneline -n5 00:01:02.600 c13c99a5e test: Various fixes for Fedora40 00:01:02.600 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:02.600 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:02.600 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:02.600 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:02.619 [Pipeline] withCredentials 00:01:02.631 > git --version # timeout=10 00:01:02.644 > git --version # 'git version 2.39.2' 00:01:02.660 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:02.662 [Pipeline] { 00:01:02.672 [Pipeline] retry 00:01:02.674 [Pipeline] { 00:01:02.690 [Pipeline] sh 00:01:02.972 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:02.983 [Pipeline] } 00:01:03.001 [Pipeline] // retry 00:01:03.006 [Pipeline] } 00:01:03.022 [Pipeline] // withCredentials 00:01:03.032 [Pipeline] httpRequest 00:01:03.415 [Pipeline] echo 00:01:03.416 Sorcerer 10.211.164.20 is alive 00:01:03.425 [Pipeline] retry 00:01:03.426 [Pipeline] { 00:01:03.440 [Pipeline] httpRequest 00:01:03.444 HttpMethod: GET 00:01:03.445 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.445 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.452 Response Code: HTTP/1.1 200 OK 00:01:03.452 Success: Status code 200 is in the accepted range: 200,404 00:01:03.453 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:24.498 [Pipeline] } 00:01:24.515 [Pipeline] // retry 
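(The download-and-unpack steps in this stage all follow one pattern: each source tree is cached as a tarball named after its pinned commit, fetched from the package cache printed above, and unpacked with tar --no-same-owner. A minimal shell sketch of that pattern, using the jbp commit from this run; curl is an assumption here, since the pipeline itself downloads through the Jenkins httpRequest step rather than a shell command:

  # Sketch only: fetch a commit-pinned tarball from the package cache and unpack it
  # without restoring the archive's original file ownership.
  sha=db4637e8b949f278f369ec13f70585206ccd9507
  curl -fsSO "http://10.211.164.20/packages/jbp_${sha}.tar.gz"
  tar --no-same-owner -xf "jbp_${sha}.tar.gz"

The same fetch/extract sequence repeats below for the spdk and dpdk tarballs.)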
00:01:24.522 [Pipeline] sh 00:01:24.859 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.247 [Pipeline] sh 00:01:26.526 + git -C dpdk log --oneline -n5 00:01:26.526 caf0f5d395 version: 22.11.4 00:01:26.526 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:26.526 dc9c799c7d vhost: fix missing spinlock unlock 00:01:26.526 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:26.526 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:26.542 [Pipeline] writeFile 00:01:26.559 [Pipeline] sh 00:01:26.838 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:26.849 [Pipeline] sh 00:01:27.129 + cat autorun-spdk.conf 00:01:27.129 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.129 SPDK_TEST_NVMF=1 00:01:27.129 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.129 SPDK_TEST_USDT=1 00:01:27.129 SPDK_RUN_UBSAN=1 00:01:27.129 SPDK_TEST_NVMF_MDNS=1 00:01:27.129 NET_TYPE=virt 00:01:27.129 SPDK_JSONRPC_GO_CLIENT=1 00:01:27.129 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:27.129 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:27.129 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.136 RUN_NIGHTLY=1 00:01:27.137 [Pipeline] } 00:01:27.151 [Pipeline] // stage 00:01:27.165 [Pipeline] stage 00:01:27.167 [Pipeline] { (Run VM) 00:01:27.179 [Pipeline] sh 00:01:27.460 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:27.460 + echo 'Start stage prepare_nvme.sh' 00:01:27.460 Start stage prepare_nvme.sh 00:01:27.460 + [[ -n 6 ]] 00:01:27.460 + disk_prefix=ex6 00:01:27.460 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:27.460 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:27.460 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:27.460 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.460 ++ SPDK_TEST_NVMF=1 00:01:27.460 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.460 ++ SPDK_TEST_USDT=1 00:01:27.460 ++ SPDK_RUN_UBSAN=1 00:01:27.460 ++ SPDK_TEST_NVMF_MDNS=1 00:01:27.460 ++ NET_TYPE=virt 00:01:27.460 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:27.460 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:27.460 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:27.460 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.460 ++ RUN_NIGHTLY=1 00:01:27.460 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:27.460 + nvme_files=() 00:01:27.460 + declare -A nvme_files 00:01:27.460 + backend_dir=/var/lib/libvirt/images/backends 00:01:27.460 + nvme_files['nvme.img']=5G 00:01:27.460 + nvme_files['nvme-cmb.img']=5G 00:01:27.460 + nvme_files['nvme-multi0.img']=4G 00:01:27.460 + nvme_files['nvme-multi1.img']=4G 00:01:27.460 + nvme_files['nvme-multi2.img']=4G 00:01:27.460 + nvme_files['nvme-openstack.img']=8G 00:01:27.460 + nvme_files['nvme-zns.img']=5G 00:01:27.460 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:27.460 + (( SPDK_TEST_FTL == 1 )) 00:01:27.460 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:27.460 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:27.460 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:27.460 + for nvme in "${!nvme_files[@]}" 00:01:27.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:27.720 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.720 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:27.720 + echo 'End stage prepare_nvme.sh' 00:01:27.720 End stage prepare_nvme.sh 00:01:27.731 [Pipeline] sh 00:01:28.011 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.011 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:28.011 00:01:28.011 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:28.011 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:28.011 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:28.011 HELP=0 00:01:28.011 DRY_RUN=0 00:01:28.011 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:28.011 NVME_DISKS_TYPE=nvme,nvme, 00:01:28.011 NVME_AUTO_CREATE=0 00:01:28.011 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:28.011 NVME_CMB=,, 00:01:28.011 NVME_PMR=,, 00:01:28.011 NVME_ZNS=,, 00:01:28.011 NVME_MS=,, 00:01:28.011 NVME_FDP=,, 00:01:28.011 
SPDK_VAGRANT_DISTRO=fedora39 00:01:28.011 SPDK_VAGRANT_VMCPU=10 00:01:28.011 SPDK_VAGRANT_VMRAM=12288 00:01:28.011 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.011 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.011 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.011 SPDK_OPENSTACK_NETWORK=0 00:01:28.011 VAGRANT_PACKAGE_BOX=0 00:01:28.011 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:28.011 FORCE_DISTRO=true 00:01:28.011 VAGRANT_BOX_VERSION= 00:01:28.011 EXTRA_VAGRANTFILES= 00:01:28.011 NIC_MODEL=e1000 00:01:28.011 00:01:28.011 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:28.011 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:31.298 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.557 ==> default: Creating image (snapshot of base box volume). 00:01:31.557 ==> default: Creating domain with the following settings... 00:01:31.557 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734290538_e79fd1118f0f75c37dbd 00:01:31.557 ==> default: -- Domain type: kvm 00:01:31.557 ==> default: -- Cpus: 10 00:01:31.557 ==> default: -- Feature: acpi 00:01:31.557 ==> default: -- Feature: apic 00:01:31.557 ==> default: -- Feature: pae 00:01:31.557 ==> default: -- Memory: 12288M 00:01:31.557 ==> default: -- Memory Backing: hugepages: 00:01:31.557 ==> default: -- Management MAC: 00:01:31.557 ==> default: -- Loader: 00:01:31.557 ==> default: -- Nvram: 00:01:31.557 ==> default: -- Base box: spdk/fedora39 00:01:31.557 ==> default: -- Storage pool: default 00:01:31.557 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734290538_e79fd1118f0f75c37dbd.img (20G) 00:01:31.557 ==> default: -- Volume Cache: default 00:01:31.557 ==> default: -- Kernel: 00:01:31.557 ==> default: -- Initrd: 00:01:31.557 ==> default: -- Graphics Type: vnc 00:01:31.557 ==> default: -- Graphics Port: -1 00:01:31.557 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.557 ==> default: -- Graphics Password: Not defined 00:01:31.557 ==> default: -- Video Type: cirrus 00:01:31.557 ==> default: -- Video VRAM: 9216 00:01:31.557 ==> default: -- Sound Type: 00:01:31.557 ==> default: -- Keymap: en-us 00:01:31.557 ==> default: -- TPM Path: 00:01:31.557 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.557 ==> default: -- Command line args: 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:31.557 ==> default: -> value=-drive, 00:01:31.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:31.557 ==> default: -> value=-drive, 00:01:31.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.557 ==> default: -> value=-drive, 00:01:31.557 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.557 ==> default: -> value=-drive, 00:01:31.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:31.557 ==> default: -> value=-device, 00:01:31.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.816 ==> default: Creating shared folders metadata... 00:01:31.816 ==> default: Starting domain. 00:01:33.721 ==> default: Waiting for domain to get an IP address... 00:01:48.597 ==> default: Waiting for SSH to become available... 00:01:49.973 ==> default: Configuring and enabling network interfaces... 00:01:54.190 default: SSH address: 192.168.121.243:22 00:01:54.190 default: SSH username: vagrant 00:01:54.190 default: SSH auth method: private key 00:01:56.720 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:03.276 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:09.839 ==> default: Mounting SSHFS shared folder... 00:02:10.774 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:10.774 ==> default: Checking Mount.. 00:02:12.148 ==> default: Folder Successfully Mounted! 00:02:12.148 ==> default: Running provisioner: file... 00:02:12.715 default: ~/.gitconfig => .gitconfig 00:02:13.282 00:02:13.282 SUCCESS! 00:02:13.282 00:02:13.282 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:13.282 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.282 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:13.282 00:02:13.289 [Pipeline] } 00:02:13.303 [Pipeline] // stage 00:02:13.311 [Pipeline] dir 00:02:13.311 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:13.313 [Pipeline] { 00:02:13.324 [Pipeline] catchError 00:02:13.326 [Pipeline] { 00:02:13.337 [Pipeline] sh 00:02:13.614 + vagrant ssh-config --host vagrant 00:02:13.614 + sed -ne /^Host/,$p 00:02:13.614 + tee ssh_conf 00:02:16.898 Host vagrant 00:02:16.898 HostName 192.168.121.243 00:02:16.898 User vagrant 00:02:16.898 Port 22 00:02:16.898 UserKnownHostsFile /dev/null 00:02:16.898 StrictHostKeyChecking no 00:02:16.898 PasswordAuthentication no 00:02:16.898 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:16.898 IdentitiesOnly yes 00:02:16.898 LogLevel FATAL 00:02:16.898 ForwardAgent yes 00:02:16.898 ForwardX11 yes 00:02:16.898 00:02:16.911 [Pipeline] withEnv 00:02:16.914 [Pipeline] { 00:02:16.927 [Pipeline] sh 00:02:17.207 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:17.207 source /etc/os-release 00:02:17.207 [[ -e /image.version ]] && img=$(< /image.version) 00:02:17.207 # Minimal, systemd-like check. 
00:02:17.207 if [[ -e /.dockerenv ]]; then 00:02:17.207 # Clear garbage from the node's name: 00:02:17.207 # agt-er_autotest_547-896 -> autotest_547-896 00:02:17.207 # $HOSTNAME is the actual container id 00:02:17.207 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:17.207 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:17.207 # We can assume this is a mount from a host where container is running, 00:02:17.207 # so fetch its hostname to easily identify the target swarm worker. 00:02:17.207 container="$(< /etc/hostname) ($agent)" 00:02:17.207 else 00:02:17.207 # Fallback 00:02:17.207 container=$agent 00:02:17.207 fi 00:02:17.207 fi 00:02:17.207 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.207 00:02:17.477 [Pipeline] } 00:02:17.492 [Pipeline] // withEnv 00:02:17.500 [Pipeline] setCustomBuildProperty 00:02:17.513 [Pipeline] stage 00:02:17.515 [Pipeline] { (Tests) 00:02:17.530 [Pipeline] sh 00:02:17.822 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:17.872 [Pipeline] sh 00:02:18.182 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:18.452 [Pipeline] timeout 00:02:18.452 Timeout set to expire in 1 hr 0 min 00:02:18.454 [Pipeline] { 00:02:18.466 [Pipeline] sh 00:02:18.744 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:19.311 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:19.323 [Pipeline] sh 00:02:19.603 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:19.875 [Pipeline] sh 00:02:20.156 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:20.429 [Pipeline] sh 00:02:20.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:20.968 ++ readlink -f spdk_repo 00:02:20.968 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:20.968 + [[ -n /home/vagrant/spdk_repo ]] 00:02:20.968 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:20.968 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:20.968 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:20.968 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:20.968 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:20.968 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:20.968 + cd /home/vagrant/spdk_repo 00:02:20.968 + source /etc/os-release 00:02:20.968 ++ NAME='Fedora Linux' 00:02:20.968 ++ VERSION='39 (Cloud Edition)' 00:02:20.968 ++ ID=fedora 00:02:20.968 ++ VERSION_ID=39 00:02:20.968 ++ VERSION_CODENAME= 00:02:20.968 ++ PLATFORM_ID=platform:f39 00:02:20.968 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:20.968 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:20.968 ++ LOGO=fedora-logo-icon 00:02:20.968 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:20.968 ++ HOME_URL=https://fedoraproject.org/ 00:02:20.968 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:20.968 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:20.968 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:20.968 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:20.968 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:20.968 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:20.968 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:20.968 ++ SUPPORT_END=2024-11-12 00:02:20.968 ++ VARIANT='Cloud Edition' 00:02:20.968 ++ VARIANT_ID=cloud 00:02:20.968 + uname -a 00:02:20.968 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:20.968 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.968 Hugepages 00:02:20.968 node hugesize free / total 00:02:20.968 node0 1048576kB 0 / 0 00:02:20.968 node0 2048kB 0 / 0 00:02:20.968 00:02:20.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.968 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:20.968 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:20.968 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:20.968 + rm -f /tmp/spdk-ld-path 00:02:20.968 + source autorun-spdk.conf 00:02:20.968 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.968 ++ SPDK_TEST_NVMF=1 00:02:20.968 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.968 ++ SPDK_TEST_USDT=1 00:02:20.968 ++ SPDK_RUN_UBSAN=1 00:02:20.968 ++ SPDK_TEST_NVMF_MDNS=1 00:02:20.968 ++ NET_TYPE=virt 00:02:20.968 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:20.968 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:20.968 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:20.968 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.968 ++ RUN_NIGHTLY=1 00:02:20.968 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.968 + [[ -n '' ]] 00:02:20.968 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.227 + for M in /var/spdk/build-*-manifest.txt 00:02:21.227 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:21.227 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.227 + for M in /var/spdk/build-*-manifest.txt 00:02:21.227 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.227 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.227 + for M in /var/spdk/build-*-manifest.txt 00:02:21.227 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.227 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.227 ++ uname 00:02:21.227 + [[ Linux == \L\i\n\u\x ]] 00:02:21.227 + sudo dmesg -T 00:02:21.227 + sudo dmesg --clear 00:02:21.227 + dmesg_pid=5970 00:02:21.227 + [[ Fedora Linux == FreeBSD ]] 00:02:21.227 + sudo dmesg -Tw 00:02:21.227 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:21.227 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.227 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.227 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.227 + export FIO_BIN=/usr/src/fio-static/fio 00:02:21.227 + FIO_BIN=/usr/src/fio-static/fio 00:02:21.227 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.227 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:21.227 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.227 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.227 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.227 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.227 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.227 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.227 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.227 Test configuration: 00:02:21.227 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.227 SPDK_TEST_NVMF=1 00:02:21.227 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:21.227 SPDK_TEST_USDT=1 00:02:21.227 SPDK_RUN_UBSAN=1 00:02:21.227 SPDK_TEST_NVMF_MDNS=1 00:02:21.227 NET_TYPE=virt 00:02:21.227 SPDK_JSONRPC_GO_CLIENT=1 00:02:21.227 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:21.227 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.227 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.227 RUN_NIGHTLY=1 19:23:08 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:21.227 19:23:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.227 19:23:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.227 19:23:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.227 19:23:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.227 19:23:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.227 19:23:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.227 19:23:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.227 19:23:08 -- paths/export.sh@5 -- $ export PATH 00:02:21.227 19:23:08 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.227 19:23:08 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.227 19:23:08 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:21.227 19:23:08 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734290588.XXXXXX 00:02:21.227 19:23:08 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734290588.oeF0yi 00:02:21.227 19:23:08 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:21.227 19:23:08 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:21.227 19:23:08 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.227 19:23:08 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:21.227 19:23:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.227 19:23:08 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.227 19:23:08 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:21.227 19:23:08 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:21.227 19:23:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.227 19:23:08 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:21.227 19:23:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.227 19:23:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.227 19:23:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.227 19:23:08 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.227 Sun Dec 15 07:23:08 PM UTC 2024 00:02:21.227 19:23:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.486 LTS-67-gc13c99a5e 00:02:21.486 19:23:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:21.486 19:23:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.486 19:23:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.486 19:23:08 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:21.486 19:23:08 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:21.486 19:23:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.486 ************************************ 00:02:21.486 START TEST ubsan 00:02:21.486 ************************************ 00:02:21.486 using ubsan 00:02:21.486 19:23:08 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:21.486 00:02:21.487 real 0m0.000s 00:02:21.487 user 0m0.000s 00:02:21.487 sys 0m0.000s 00:02:21.487 19:23:08 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:21.487 ************************************ 00:02:21.487 END TEST ubsan 00:02:21.487 ************************************ 00:02:21.487 19:23:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.487 
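(The "START TEST ubsan" / "END TEST ubsan" banners and the real/user/sys timing above come from SPDK's run_test helper, which the xtrace shows being called from autobuild.sh and executing lines in common/autotest_common.sh. A rough sketch of that banner-and-time behaviour, reconstructed only from what this log prints; the real helper does more, e.g. xtrace control and error propagation:

  # Rough sketch of the run_test wrapper behaviour seen above (not the actual implementation).
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test ubsan echo 'using ubsan'
)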
19:23:08 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:21.487 19:23:08 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.487 19:23:08 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:21.487 19:23:08 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:21.487 19:23:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.487 ************************************ 00:02:21.487 START TEST build_native_dpdk 00:02:21.487 ************************************ 00:02:21.487 19:23:08 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.487 19:23:08 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.487 19:23:08 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.487 19:23:08 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.487 19:23:08 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.487 19:23:08 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.487 19:23:08 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.487 19:23:08 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.487 19:23:08 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.487 19:23:08 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:21.487 19:23:08 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:21.487 19:23:08 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.487 19:23:08 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.487 19:23:08 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.487 caf0f5d395 version: 22.11.4 00:02:21.487 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:21.487 dc9c799c7d vhost: fix missing spinlock unlock 00:02:21.487 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:21.487 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:21.487 19:23:08 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.487 19:23:08 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.487 19:23:08 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:21.487 19:23:08 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.487 19:23:08 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:21.487 19:23:08 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:21.487 19:23:08 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:21.487 19:23:08 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:21.487 19:23:08 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.487 19:23:08 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:21.487 19:23:08 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:21.487 19:23:08 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:21.487 19:23:08 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:21.487 19:23:08 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.487 19:23:08 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.487 19:23:08 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.487 19:23:08 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.487 19:23:08 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.487 19:23:08 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.487 19:23:08 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.487 19:23:08 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.487 19:23:08 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.487 19:23:08 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.487 19:23:08 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.487 19:23:08 -- scripts/common.sh@344 -- $ : 1 00:02:21.487 19:23:08 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.487 19:23:08 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.487 19:23:08 -- scripts/common.sh@364 -- $ decimal 22 00:02:21.487 19:23:08 -- scripts/common.sh@352 -- $ local d=22 00:02:21.487 19:23:08 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:21.487 19:23:08 -- scripts/common.sh@354 -- $ echo 22 00:02:21.487 19:23:08 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:21.487 19:23:08 -- scripts/common.sh@365 -- $ decimal 21 00:02:21.487 19:23:08 -- scripts/common.sh@352 -- $ local d=21 00:02:21.487 19:23:08 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.487 19:23:08 -- scripts/common.sh@354 -- $ echo 21 00:02:21.487 19:23:08 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:21.487 19:23:08 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.487 19:23:08 -- scripts/common.sh@366 -- $ return 1 00:02:21.487 19:23:08 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:21.487 patching file config/rte_config.h 00:02:21.487 Hunk #1 succeeded at 60 (offset 1 line). 00:02:21.487 19:23:08 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:21.487 19:23:08 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:21.487 19:23:08 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.487 19:23:08 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.487 19:23:08 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.487 19:23:08 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.487 19:23:08 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.487 19:23:08 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.487 19:23:08 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.487 19:23:08 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.487 19:23:08 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.487 19:23:08 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.487 19:23:08 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.487 19:23:08 -- scripts/common.sh@344 -- $ : 1 00:02:21.487 19:23:08 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.487 19:23:08 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.487 19:23:08 -- scripts/common.sh@364 -- $ decimal 22 00:02:21.487 19:23:08 -- scripts/common.sh@352 -- $ local d=22 00:02:21.487 19:23:08 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:21.487 19:23:08 -- scripts/common.sh@354 -- $ echo 22 00:02:21.487 19:23:08 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:21.487 19:23:08 -- scripts/common.sh@365 -- $ decimal 24 00:02:21.487 19:23:08 -- scripts/common.sh@352 -- $ local d=24 00:02:21.487 19:23:08 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.487 19:23:08 -- scripts/common.sh@354 -- $ echo 24 00:02:21.487 19:23:08 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:21.487 19:23:08 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.487 19:23:08 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:21.487 19:23:08 -- scripts/common.sh@367 -- $ return 0 00:02:21.487 19:23:08 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:21.487 patching file lib/pcapng/rte_pcapng.c 00:02:21.487 Hunk #1 succeeded at 110 (offset -18 lines). 
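(The lt/cmp_versions xtrace above is a field-by-field numeric comparison of dotted version strings: it concludes that DPDK 22.11.4 is not older than 21.11.0 but is older than 24.07.0, which is what gates the rte_config.h and rte_pcapng.c compatibility patches just applied. A compact stand-alone sketch of the same idea, independent of the actual scripts/common.sh implementation:

  # Sketch: succeed when dotted version $1 is strictly older than $2 (splits on "." and "-").
  version_lt() {
      local IFS=.- i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "older than"
  }
  version_lt 22.11.4 24.07.0 && echo "apply pre-24.07 pcapng compatibility patch"
)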
00:02:21.487 19:23:08 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:21.487 19:23:08 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:21.487 19:23:08 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:21.487 19:23:08 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:21.487 19:23:08 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:26.756 The Meson build system 00:02:26.756 Version: 1.5.0 00:02:26.756 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:26.756 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:26.756 Build type: native build 00:02:26.756 Program cat found: YES (/usr/bin/cat) 00:02:26.756 Project name: DPDK 00:02:26.756 Project version: 22.11.4 00:02:26.756 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:26.756 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:26.756 Host machine cpu family: x86_64 00:02:26.756 Host machine cpu: x86_64 00:02:26.756 Message: ## Building in Developer Mode ## 00:02:26.756 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.756 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:26.756 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.756 Program objdump found: YES (/usr/bin/objdump) 00:02:26.756 Program python3 found: YES (/usr/bin/python3) 00:02:26.756 Program cat found: YES (/usr/bin/cat) 00:02:26.756 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:26.756 Checking for size of "void *" : 8 00:02:26.756 Checking for size of "void *" : 8 (cached) 00:02:26.756 Library m found: YES 00:02:26.756 Library numa found: YES 00:02:26.756 Has header "numaif.h" : YES 00:02:26.756 Library fdt found: NO 00:02:26.756 Library execinfo found: NO 00:02:26.756 Has header "execinfo.h" : YES 00:02:26.756 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:26.756 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.756 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.756 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.756 Run-time dependency openssl found: YES 3.1.1 00:02:26.756 Run-time dependency libpcap found: YES 1.10.4 00:02:26.756 Has header "pcap.h" with dependency libpcap: YES 00:02:26.756 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.756 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.756 Compiler for C supports arguments -Wformat: YES 00:02:26.756 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.756 Compiler for C supports arguments -Wformat-security: NO 00:02:26.756 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.756 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.756 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.756 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.756 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.756 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.756 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.756 Compiler for C supports arguments -Wundef: YES 00:02:26.756 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.756 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.756 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.756 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.756 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.756 Compiler for C supports arguments -mavx512f: YES 00:02:26.756 Checking if "AVX512 checking" compiles: YES 00:02:26.756 Fetching value of define "__SSE4_2__" : 1 00:02:26.757 Fetching value of define "__AES__" : 1 00:02:26.757 Fetching value of define "__AVX__" : 1 00:02:26.757 Fetching value of define "__AVX2__" : 1 00:02:26.757 Fetching value of define "__AVX512BW__" : (undefined) 00:02:26.757 Fetching value of define "__AVX512CD__" : (undefined) 00:02:26.757 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:26.757 Fetching value of define "__AVX512F__" : (undefined) 00:02:26.757 Fetching value of define "__AVX512VL__" : (undefined) 00:02:26.757 Fetching value of define "__PCLMUL__" : 1 00:02:26.757 Fetching value of define "__RDRND__" : 1 00:02:26.757 Fetching value of define "__RDSEED__" : 1 00:02:26.757 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.757 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.757 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.757 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.757 Checking for function "getentropy" : YES 00:02:26.757 Message: lib/eal: Defining dependency "eal" 00:02:26.757 Message: lib/ring: Defining dependency "ring" 00:02:26.757 Message: lib/rcu: Defining dependency "rcu" 00:02:26.757 Message: lib/mempool: Defining dependency "mempool" 00:02:26.757 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.757 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:26.757 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.757 Compiler for C supports arguments -mpclmul: YES 00:02:26.757 Compiler for C supports arguments -maes: YES 00:02:26.757 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.757 Compiler for C supports arguments -mavx512bw: YES 00:02:26.757 Compiler for C supports arguments -mavx512dq: YES 00:02:26.757 Compiler for C supports arguments -mavx512vl: YES 00:02:26.757 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.757 Compiler for C supports arguments -mavx2: YES 00:02:26.757 Compiler for C supports arguments -mavx: YES 00:02:26.757 Message: lib/net: Defining dependency "net" 00:02:26.757 Message: lib/meter: Defining dependency "meter" 00:02:26.757 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.757 Message: lib/pci: Defining dependency "pci" 00:02:26.757 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.757 Message: lib/metrics: Defining dependency "metrics" 00:02:26.757 Message: lib/hash: Defining dependency "hash" 00:02:26.757 Message: lib/timer: Defining dependency "timer" 00:02:26.757 Fetching value of define "__AVX2__" : 1 (cached) 00:02:26.757 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:26.757 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:26.757 Message: lib/acl: Defining dependency "acl" 00:02:26.757 Message: lib/bbdev: Defining dependency "bbdev" 00:02:26.757 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:26.757 Run-time dependency libelf found: YES 0.191 00:02:26.757 Message: lib/bpf: Defining dependency "bpf" 00:02:26.757 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:26.757 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.757 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.757 Message: lib/distributor: Defining dependency "distributor" 00:02:26.757 Message: lib/efd: Defining dependency "efd" 00:02:26.757 Message: lib/eventdev: Defining dependency "eventdev" 00:02:26.757 Message: lib/gpudev: Defining dependency "gpudev" 00:02:26.757 Message: lib/gro: Defining dependency "gro" 00:02:26.757 Message: lib/gso: Defining dependency "gso" 00:02:26.757 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:26.757 Message: lib/jobstats: Defining dependency "jobstats" 00:02:26.757 Message: lib/latencystats: Defining dependency "latencystats" 00:02:26.757 Message: lib/lpm: Defining dependency "lpm" 00:02:26.757 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:26.757 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:26.757 Message: lib/member: Defining dependency "member" 00:02:26.757 Message: lib/pcapng: Defining dependency "pcapng" 00:02:26.757 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.757 Message: lib/power: Defining dependency "power" 00:02:26.757 Message: lib/rawdev: Defining dependency "rawdev" 00:02:26.757 Message: lib/regexdev: Defining dependency "regexdev" 00:02:26.757 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.757 Message: lib/rib: Defining 
dependency "rib" 00:02:26.757 Message: lib/reorder: Defining dependency "reorder" 00:02:26.757 Message: lib/sched: Defining dependency "sched" 00:02:26.757 Message: lib/security: Defining dependency "security" 00:02:26.757 Message: lib/stack: Defining dependency "stack" 00:02:26.757 Has header "linux/userfaultfd.h" : YES 00:02:26.757 Message: lib/vhost: Defining dependency "vhost" 00:02:26.757 Message: lib/ipsec: Defining dependency "ipsec" 00:02:26.757 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.757 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.757 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:26.757 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:26.757 Message: lib/fib: Defining dependency "fib" 00:02:26.757 Message: lib/port: Defining dependency "port" 00:02:26.757 Message: lib/pdump: Defining dependency "pdump" 00:02:26.757 Message: lib/table: Defining dependency "table" 00:02:26.757 Message: lib/pipeline: Defining dependency "pipeline" 00:02:26.757 Message: lib/graph: Defining dependency "graph" 00:02:26.757 Message: lib/node: Defining dependency "node" 00:02:26.757 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:26.757 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:26.757 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:26.757 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:26.757 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:26.757 Compiler for C supports arguments -Wno-unused-value: YES 00:02:26.757 Compiler for C supports arguments -Wno-format: YES 00:02:26.757 Compiler for C supports arguments -Wno-format-security: YES 00:02:26.757 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:28.164 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:28.164 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:28.164 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:28.164 Fetching value of define "__AVX2__" : 1 (cached) 00:02:28.164 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.164 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.164 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:28.164 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:28.164 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:28.164 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:28.164 Configuring doxy-api.conf using configuration 00:02:28.164 Program sphinx-build found: NO 00:02:28.164 Configuring rte_build_config.h using configuration 00:02:28.164 Message: 00:02:28.164 ================= 00:02:28.164 Applications Enabled 00:02:28.164 ================= 00:02:28.164 00:02:28.164 apps: 00:02:28.164 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:28.164 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:28.164 test-security-perf, 00:02:28.164 00:02:28.165 Message: 00:02:28.165 ================= 00:02:28.165 Libraries Enabled 00:02:28.165 ================= 00:02:28.165 00:02:28.165 libs: 00:02:28.165 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:28.165 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:28.165 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:28.165 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:28.165 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:28.165 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:28.165 table, pipeline, graph, node, 00:02:28.165 00:02:28.165 Message: 00:02:28.165 =============== 00:02:28.165 Drivers Enabled 00:02:28.165 =============== 00:02:28.165 00:02:28.165 common: 00:02:28.165 00:02:28.165 bus: 00:02:28.165 pci, vdev, 00:02:28.165 mempool: 00:02:28.165 ring, 00:02:28.165 dma: 00:02:28.165 00:02:28.165 net: 00:02:28.165 i40e, 00:02:28.165 raw: 00:02:28.165 00:02:28.165 crypto: 00:02:28.165 00:02:28.165 compress: 00:02:28.165 00:02:28.165 regex: 00:02:28.165 00:02:28.165 vdpa: 00:02:28.165 00:02:28.165 event: 00:02:28.165 00:02:28.165 baseband: 00:02:28.165 00:02:28.165 gpu: 00:02:28.165 00:02:28.165 00:02:28.165 Message: 00:02:28.165 ================= 00:02:28.165 Content Skipped 00:02:28.165 ================= 00:02:28.165 00:02:28.165 apps: 00:02:28.165 00:02:28.165 libs: 00:02:28.165 kni: explicitly disabled via build config (deprecated lib) 00:02:28.165 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:28.165 00:02:28.165 drivers: 00:02:28.165 common/cpt: not in enabled drivers build config 00:02:28.165 common/dpaax: not in enabled drivers build config 00:02:28.165 common/iavf: not in enabled drivers build config 00:02:28.165 common/idpf: not in enabled drivers build config 00:02:28.165 common/mvep: not in enabled drivers build config 00:02:28.165 common/octeontx: not in enabled drivers build config 00:02:28.165 bus/auxiliary: not in enabled drivers build config 00:02:28.165 bus/dpaa: not in enabled drivers build config 00:02:28.165 bus/fslmc: not in enabled drivers build config 00:02:28.165 bus/ifpga: not in enabled drivers build config 00:02:28.165 bus/vmbus: not in enabled drivers build config 00:02:28.165 common/cnxk: not in enabled drivers build config 00:02:28.165 common/mlx5: not in enabled drivers build config 00:02:28.165 common/qat: not in enabled drivers build config 00:02:28.165 common/sfc_efx: not in enabled drivers build config 00:02:28.165 mempool/bucket: not in enabled drivers build config 00:02:28.165 mempool/cnxk: not in enabled drivers build config 00:02:28.165 mempool/dpaa: not in enabled drivers build config 00:02:28.165 mempool/dpaa2: not in enabled drivers build config 00:02:28.165 mempool/octeontx: not in enabled drivers build config 00:02:28.165 mempool/stack: not in enabled drivers build config 00:02:28.165 dma/cnxk: not in enabled drivers build config 00:02:28.165 dma/dpaa: not in enabled drivers build config 00:02:28.165 dma/dpaa2: not in enabled drivers build config 00:02:28.165 dma/hisilicon: not in enabled drivers build config 00:02:28.165 dma/idxd: not in enabled drivers build config 00:02:28.165 dma/ioat: not in enabled drivers build config 00:02:28.165 dma/skeleton: not in enabled drivers build config 00:02:28.165 net/af_packet: not in enabled drivers build config 00:02:28.165 net/af_xdp: not in enabled drivers build config 00:02:28.165 net/ark: not in enabled drivers build config 00:02:28.165 net/atlantic: not in enabled drivers build config 00:02:28.165 net/avp: not in enabled drivers build config 00:02:28.165 net/axgbe: not in enabled drivers build config 00:02:28.165 net/bnx2x: not in enabled drivers build config 00:02:28.165 net/bnxt: not in enabled drivers build config 00:02:28.165 net/bonding: not in enabled drivers build config 00:02:28.165 net/cnxk: not in enabled drivers build config 00:02:28.165 net/cxgbe: not in 
enabled drivers build config 00:02:28.165 net/dpaa: not in enabled drivers build config 00:02:28.165 net/dpaa2: not in enabled drivers build config 00:02:28.165 net/e1000: not in enabled drivers build config 00:02:28.165 net/ena: not in enabled drivers build config 00:02:28.165 net/enetc: not in enabled drivers build config 00:02:28.165 net/enetfec: not in enabled drivers build config 00:02:28.165 net/enic: not in enabled drivers build config 00:02:28.165 net/failsafe: not in enabled drivers build config 00:02:28.165 net/fm10k: not in enabled drivers build config 00:02:28.165 net/gve: not in enabled drivers build config 00:02:28.165 net/hinic: not in enabled drivers build config 00:02:28.165 net/hns3: not in enabled drivers build config 00:02:28.165 net/iavf: not in enabled drivers build config 00:02:28.165 net/ice: not in enabled drivers build config 00:02:28.165 net/idpf: not in enabled drivers build config 00:02:28.165 net/igc: not in enabled drivers build config 00:02:28.165 net/ionic: not in enabled drivers build config 00:02:28.165 net/ipn3ke: not in enabled drivers build config 00:02:28.165 net/ixgbe: not in enabled drivers build config 00:02:28.165 net/kni: not in enabled drivers build config 00:02:28.165 net/liquidio: not in enabled drivers build config 00:02:28.165 net/mana: not in enabled drivers build config 00:02:28.165 net/memif: not in enabled drivers build config 00:02:28.165 net/mlx4: not in enabled drivers build config 00:02:28.165 net/mlx5: not in enabled drivers build config 00:02:28.165 net/mvneta: not in enabled drivers build config 00:02:28.165 net/mvpp2: not in enabled drivers build config 00:02:28.165 net/netvsc: not in enabled drivers build config 00:02:28.165 net/nfb: not in enabled drivers build config 00:02:28.165 net/nfp: not in enabled drivers build config 00:02:28.165 net/ngbe: not in enabled drivers build config 00:02:28.165 net/null: not in enabled drivers build config 00:02:28.165 net/octeontx: not in enabled drivers build config 00:02:28.165 net/octeon_ep: not in enabled drivers build config 00:02:28.165 net/pcap: not in enabled drivers build config 00:02:28.165 net/pfe: not in enabled drivers build config 00:02:28.165 net/qede: not in enabled drivers build config 00:02:28.165 net/ring: not in enabled drivers build config 00:02:28.165 net/sfc: not in enabled drivers build config 00:02:28.165 net/softnic: not in enabled drivers build config 00:02:28.165 net/tap: not in enabled drivers build config 00:02:28.165 net/thunderx: not in enabled drivers build config 00:02:28.165 net/txgbe: not in enabled drivers build config 00:02:28.165 net/vdev_netvsc: not in enabled drivers build config 00:02:28.165 net/vhost: not in enabled drivers build config 00:02:28.165 net/virtio: not in enabled drivers build config 00:02:28.165 net/vmxnet3: not in enabled drivers build config 00:02:28.165 raw/cnxk_bphy: not in enabled drivers build config 00:02:28.165 raw/cnxk_gpio: not in enabled drivers build config 00:02:28.165 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:28.165 raw/ifpga: not in enabled drivers build config 00:02:28.165 raw/ntb: not in enabled drivers build config 00:02:28.165 raw/skeleton: not in enabled drivers build config 00:02:28.165 crypto/armv8: not in enabled drivers build config 00:02:28.165 crypto/bcmfs: not in enabled drivers build config 00:02:28.165 crypto/caam_jr: not in enabled drivers build config 00:02:28.165 crypto/ccp: not in enabled drivers build config 00:02:28.165 crypto/cnxk: not in enabled drivers build config 00:02:28.165 
crypto/dpaa_sec: not in enabled drivers build config 00:02:28.165 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.165 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.165 crypto/mlx5: not in enabled drivers build config 00:02:28.165 crypto/mvsam: not in enabled drivers build config 00:02:28.165 crypto/nitrox: not in enabled drivers build config 00:02:28.165 crypto/null: not in enabled drivers build config 00:02:28.165 crypto/octeontx: not in enabled drivers build config 00:02:28.165 crypto/openssl: not in enabled drivers build config 00:02:28.165 crypto/scheduler: not in enabled drivers build config 00:02:28.165 crypto/uadk: not in enabled drivers build config 00:02:28.165 crypto/virtio: not in enabled drivers build config 00:02:28.165 compress/isal: not in enabled drivers build config 00:02:28.165 compress/mlx5: not in enabled drivers build config 00:02:28.165 compress/octeontx: not in enabled drivers build config 00:02:28.165 compress/zlib: not in enabled drivers build config 00:02:28.165 regex/mlx5: not in enabled drivers build config 00:02:28.165 regex/cn9k: not in enabled drivers build config 00:02:28.165 vdpa/ifc: not in enabled drivers build config 00:02:28.165 vdpa/mlx5: not in enabled drivers build config 00:02:28.165 vdpa/sfc: not in enabled drivers build config 00:02:28.165 event/cnxk: not in enabled drivers build config 00:02:28.165 event/dlb2: not in enabled drivers build config 00:02:28.165 event/dpaa: not in enabled drivers build config 00:02:28.165 event/dpaa2: not in enabled drivers build config 00:02:28.165 event/dsw: not in enabled drivers build config 00:02:28.165 event/opdl: not in enabled drivers build config 00:02:28.165 event/skeleton: not in enabled drivers build config 00:02:28.165 event/sw: not in enabled drivers build config 00:02:28.165 event/octeontx: not in enabled drivers build config 00:02:28.165 baseband/acc: not in enabled drivers build config 00:02:28.165 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:28.165 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:28.165 baseband/la12xx: not in enabled drivers build config 00:02:28.165 baseband/null: not in enabled drivers build config 00:02:28.165 baseband/turbo_sw: not in enabled drivers build config 00:02:28.165 gpu/cuda: not in enabled drivers build config 00:02:28.165 00:02:28.165 00:02:28.165 Build targets in project: 314 00:02:28.166 00:02:28.166 DPDK 22.11.4 00:02:28.166 00:02:28.166 User defined options 00:02:28.166 libdir : lib 00:02:28.166 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:28.166 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:28.166 c_link_args : 00:02:28.166 enable_docs : false 00:02:28.166 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:28.166 enable_kmods : false 00:02:28.166 machine : native 00:02:28.166 tests : false 00:02:28.166 00:02:28.166 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.166 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
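Note: the configuration summary above maps onto a plain meson/ninja invocation roughly like the sketch below. It is reconstructed only from the logged "User defined options" (prefix, libdir, machine, tests, enable_docs, enable_kmods, enable_drivers, c_args); it assumes the commands are run from the DPDK source checkout and is illustrative rather than the exact command line the SPDK autobuild scripts issue. The explicit `meson setup` form is used here because of the WARNING above about the bare `meson [options]` form being deprecated.
  # Reconstructed configure step (sketch; every option is taken from the summary above)
  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dmachine=native \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'
  # Build step, matching the ninja call that follows in the log
  ninja -C build-tmp -j10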
00:02:28.423 19:23:15 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:28.423 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:28.423 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:28.423 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:28.423 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:28.423 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:28.423 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.423 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.423 [7/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.423 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.423 [9/743] Linking static target lib/librte_kvargs.a 00:02:28.423 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.423 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.680 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.680 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.680 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.680 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.680 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.680 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.680 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.680 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.680 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.680 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:28.680 [22/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.938 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:28.938 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.938 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.938 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.938 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.938 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.938 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.938 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.938 [31/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.938 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.938 [33/743] Linking static target lib/librte_telemetry.a 00:02:28.938 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:29.197 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.197 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:29.197 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.197 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:29.197 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.197 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.197 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.455 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.455 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.455 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.455 [45/743] Linking target lib/librte_telemetry.so.23.0 00:02:29.455 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.455 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.455 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.455 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:29.455 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.455 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.455 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.455 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.713 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.713 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.713 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.713 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.713 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.713 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.713 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.713 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.713 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.713 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.713 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.713 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.713 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:29.713 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.713 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.972 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.972 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.972 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.972 [72/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.972 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.972 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.972 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.972 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.972 [77/743] Generating lib/rte_eal_def with a custom command 00:02:29.972 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:29.972 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.972 [80/743] Generating lib/rte_ring_def with a custom command 00:02:29.972 [81/743] Generating lib/rte_ring_mingw with a custom command 00:02:29.972 [82/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.972 [83/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.972 [84/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.972 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:29.972 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:30.230 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.230 [88/743] Linking static target lib/librte_ring.a 00:02:30.230 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.230 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:30.230 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:30.230 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.488 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.488 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.488 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.488 [96/743] Linking static target lib/librte_eal.a 00:02:30.747 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.747 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:30.747 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.747 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:30.747 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:31.005 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.005 [103/743] Linking static target lib/librte_rcu.a 00:02:31.005 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:31.005 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.263 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:31.263 [107/743] Linking static target lib/librte_mempool.a 00:02:31.263 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.263 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.263 [110/743] Generating lib/rte_net_def with a custom command 00:02:31.263 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:31.263 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:31.263 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.263 [114/743] Generating lib/rte_meter_def with a custom command 00:02:31.522 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:31.522 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.522 [117/743] Linking static target lib/librte_meter.a 00:02:31.522 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.522 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.522 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:31.780 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.780 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.780 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.780 [124/743] Linking static target lib/librte_mbuf.a 00:02:31.780 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.780 [126/743] Linking static target lib/librte_net.a 00:02:32.038 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.038 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.297 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.297 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.297 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:32.297 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.297 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.297 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.555 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.122 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.122 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:33.122 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.122 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:33.122 [140/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.122 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.122 [142/743] Generating lib/rte_pci_def with a custom command 00:02:33.122 [143/743] Generating lib/rte_pci_mingw with a custom command 00:02:33.122 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:33.122 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.122 [146/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:33.122 [147/743] Linking static target lib/librte_pci.a 00:02:33.122 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:33.122 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:33.380 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.380 [151/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.380 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.380 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.380 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.380 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.380 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.380 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:33.380 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:33.380 [159/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.380 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.380 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:33.380 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:33.639 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:33.639 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.639 [165/743] Generating lib/rte_hash_def with a custom command 00:02:33.639 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.639 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:33.639 [168/743] Generating lib/rte_timer_def with a custom command 00:02:33.639 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.639 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.639 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.639 [172/743] Linking static target lib/librte_cmdline.a 00:02:33.639 [173/743] Generating lib/rte_timer_mingw with a custom command 00:02:34.206 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:34.206 [175/743] Linking static target lib/librte_metrics.a 00:02:34.206 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.206 [177/743] Linking static target lib/librte_timer.a 00:02:34.464 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.464 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.464 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.723 [181/743] Linking static target lib/librte_ethdev.a 00:02:34.723 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:34.723 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.723 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.290 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:35.290 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:35.290 [187/743] Generating lib/rte_acl_def with a custom command 00:02:35.290 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:35.290 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:35.290 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:35.290 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:35.290 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:35.548 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:35.548 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:36.115 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:36.115 [196/743] Linking static target lib/librte_bitratestats.a 00:02:36.115 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:36.115 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.115 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:36.115 [200/743] Linking static target lib/librte_bbdev.a 00:02:36.373 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:36.373 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.373 [203/743] Linking static target lib/librte_hash.a 00:02:36.632 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:36.632 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:36.632 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:02:36.890 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.890 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:36.890 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:37.149 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.149 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:37.149 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:37.149 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:37.149 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:37.149 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:37.407 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:37.407 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:37.407 [218/743] Linking static target lib/librte_acl.a 00:02:37.407 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:37.408 [220/743] Linking static target lib/librte_cfgfile.a 00:02:37.408 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:37.666 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:37.666 [223/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.666 [224/743] Generating lib/rte_compressdev_def with a custom command 00:02:37.666 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:37.666 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.666 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.953 [228/743] Linking target lib/librte_eal.so.23.0 00:02:37.953 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:37.953 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.953 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:37.953 [232/743] Generating lib/rte_cryptodev_def with a custom command 00:02:37.953 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:37.953 [234/743] Linking target lib/librte_ring.so.23.0 00:02:37.953 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:38.235 [236/743] Linking target lib/librte_meter.so.23.0 00:02:38.235 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:38.235 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:38.235 [239/743] Linking target lib/librte_pci.so.23.0 00:02:38.235 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:38.235 [241/743] Linking target lib/librte_mempool.so.23.0 00:02:38.235 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:38.235 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:38.235 [244/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:38.235 [245/743] Linking target lib/librte_timer.so.23.0 00:02:38.235 [246/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:38.235 [247/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:38.235 [248/743] Linking static target lib/librte_bpf.a 00:02:38.235 [249/743] 
Linking target lib/librte_acl.so.23.0 00:02:38.235 [250/743] Linking target lib/librte_mbuf.so.23.0 00:02:38.493 [251/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:38.493 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:38.493 [253/743] Linking static target lib/librte_compressdev.a 00:02:38.493 [254/743] Linking target lib/librte_cfgfile.so.23.0 00:02:38.493 [255/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:38.493 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:38.493 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:38.493 [258/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:38.493 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:02:38.493 [260/743] Linking target lib/librte_net.so.23.0 00:02:38.493 [261/743] Linking target lib/librte_bbdev.so.23.0 00:02:38.493 [262/743] Generating lib/rte_efd_def with a custom command 00:02:38.493 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:38.751 [264/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.751 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:38.751 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:38.751 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:38.751 [268/743] Linking target lib/librte_hash.so.23.0 00:02:38.751 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:39.009 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:39.009 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:39.009 [272/743] Linking static target lib/librte_distributor.a 00:02:39.009 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.268 [274/743] Linking target lib/librte_ethdev.so.23.0 00:02:39.268 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:39.268 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.268 [277/743] Linking target lib/librte_distributor.so.23.0 00:02:39.268 [278/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.268 [279/743] Linking target lib/librte_compressdev.so.23.0 00:02:39.268 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:39.268 [281/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:39.268 [282/743] Linking target lib/librte_metrics.so.23.0 00:02:39.526 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:39.526 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:39.526 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:02:39.526 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:39.526 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:39.526 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:39.526 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:39.526 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:39.785 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:40.043 [292/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.043 [293/743] Linking static target lib/librte_cryptodev.a 00:02:40.043 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:40.043 [295/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:40.043 [296/743] Linking static target lib/librte_efd.a 00:02:40.302 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:40.302 [298/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.302 [299/743] Linking target lib/librte_efd.so.23.0 00:02:40.302 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:40.302 [301/743] Generating lib/rte_gro_def with a custom command 00:02:40.302 [302/743] Generating lib/rte_gro_mingw with a custom command 00:02:40.561 [303/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:40.561 [304/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:40.561 [305/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:40.561 [306/743] Linking static target lib/librte_gpudev.a 00:02:40.820 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:40.820 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:41.079 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:41.079 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:41.079 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:41.079 [312/743] Generating lib/rte_gso_def with a custom command 00:02:41.079 [313/743] Linking static target lib/librte_gro.a 00:02:41.079 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:41.079 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:41.338 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:41.338 [317/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.338 [318/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.338 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:41.338 [320/743] Linking target lib/librte_gro.so.23.0 00:02:41.338 [321/743] Linking target lib/librte_gpudev.so.23.0 00:02:41.338 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:41.338 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:41.338 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:41.596 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:41.597 [326/743] Linking static target lib/librte_eventdev.a 00:02:41.597 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:41.597 [328/743] Linking static target lib/librte_jobstats.a 00:02:41.597 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:41.597 [330/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:41.597 [331/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:41.597 [332/743] Linking static target lib/librte_gso.a 00:02:41.856 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.856 [334/743] Linking target 
lib/librte_gso.so.23.0 00:02:41.856 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:41.856 [336/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.856 [337/743] Generating lib/rte_latencystats_def with a custom command 00:02:41.856 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:41.856 [339/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:42.115 [340/743] Linking target lib/librte_jobstats.so.23.0 00:02:42.115 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:42.115 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:42.115 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:42.115 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:42.115 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.115 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:42.115 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:42.115 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:42.115 [349/743] Linking static target lib/librte_ip_frag.a 00:02:42.374 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:42.374 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.633 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:42.633 [353/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:42.633 [354/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:42.633 [355/743] Linking static target lib/librte_latencystats.a 00:02:42.633 [356/743] Generating lib/rte_member_def with a custom command 00:02:42.633 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:42.633 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:42.633 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:42.633 [360/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:42.633 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:42.892 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:42.892 [363/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.892 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:42.892 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.892 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:42.892 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.892 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.150 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:43.150 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:43.150 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:43.408 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:43.408 [373/743] Generating lib/rte_power_def with a custom command 00:02:43.408 [374/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:43.408 [375/743] 
Generating lib/rte_power_mingw with a custom command 00:02:43.408 [376/743] Linking static target lib/librte_lpm.a 00:02:43.408 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.408 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:43.408 [379/743] Linking target lib/librte_eventdev.so.23.0 00:02:43.408 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:43.667 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:43.667 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:43.667 [383/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:43.667 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:43.667 [385/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:43.667 [386/743] Generating lib/rte_dmadev_def with a custom command 00:02:43.667 [387/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.667 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:43.667 [389/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:43.667 [390/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:43.667 [391/743] Linking static target lib/librte_pcapng.a 00:02:43.667 [392/743] Linking static target lib/librte_rawdev.a 00:02:43.667 [393/743] Linking target lib/librte_lpm.so.23.0 00:02:43.667 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:43.925 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.925 [396/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:43.925 [397/743] Generating lib/rte_rib_def with a custom command 00:02:43.925 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:43.925 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:43.925 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:43.925 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.925 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.925 [403/743] Linking static target lib/librte_power.a 00:02:43.925 [404/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.925 [405/743] Linking target lib/librte_pcapng.so.23.0 00:02:43.925 [406/743] Linking static target lib/librte_dmadev.a 00:02:44.184 [407/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.184 [408/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:44.184 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:44.184 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:44.184 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:44.184 [412/743] Linking static target lib/librte_regexdev.a 00:02:44.443 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:44.443 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:44.443 [415/743] Generating lib/rte_sched_def with a custom command 00:02:44.443 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:44.443 [417/743] Generating lib/rte_security_def with a custom command 00:02:44.443 [418/743] Generating 
lib/rte_security_mingw with a custom command 00:02:44.443 [419/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:44.443 [420/743] Linking static target lib/librte_member.a 00:02:44.443 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:44.443 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.702 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:44.702 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:44.702 [425/743] Linking target lib/librte_dmadev.so.23.0 00:02:44.702 [426/743] Generating lib/rte_stack_def with a custom command 00:02:44.702 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.702 [428/743] Generating lib/rte_stack_mingw with a custom command 00:02:44.702 [429/743] Linking static target lib/librte_reorder.a 00:02:44.702 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:44.702 [431/743] Linking static target lib/librte_stack.a 00:02:44.702 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:44.702 [433/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.960 [434/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.960 [435/743] Linking target lib/librte_member.so.23.0 00:02:44.960 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.960 [437/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:44.960 [438/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.960 [439/743] Linking static target lib/librte_rib.a 00:02:44.960 [440/743] Linking target lib/librte_stack.so.23.0 00:02:44.960 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:44.960 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.960 [443/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.960 [444/743] Linking target lib/librte_power.so.23.0 00:02:44.960 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:45.219 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.219 [447/743] Linking static target lib/librte_security.a 00:02:45.219 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.478 [449/743] Linking target lib/librte_rib.so.23.0 00:02:45.478 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:45.478 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.478 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:45.478 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:45.478 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.736 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.736 [456/743] Linking target lib/librte_security.so.23.0 00:02:45.736 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.736 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:45.995 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:45.995 [460/743] Linking static target lib/librte_sched.a 00:02:46.254 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.254 [462/743] Linking target lib/librte_sched.so.23.0 00:02:46.254 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:46.254 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.513 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:46.513 [466/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:46.513 [467/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:46.513 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:46.513 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.513 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:46.771 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:47.030 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:47.030 [473/743] Generating lib/rte_fib_def with a custom command 00:02:47.030 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:47.030 [475/743] Generating lib/rte_fib_mingw with a custom command 00:02:47.030 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:47.030 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:47.030 [478/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:47.288 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:47.288 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:47.288 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:47.288 [482/743] Linking static target lib/librte_ipsec.a 00:02:47.856 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.856 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:47.856 [485/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:47.856 [486/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:47.856 [487/743] Linking static target lib/librte_fib.a 00:02:48.114 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:48.114 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:48.114 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:48.114 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:48.382 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.382 [493/743] Linking target lib/librte_fib.so.23.0 00:02:48.382 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.951 [495/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:48.951 [496/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:48.951 [497/743] Generating lib/rte_port_def with a custom command 00:02:48.951 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:48.951 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:48.951 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:49.209 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:49.209 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:49.209 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:49.209 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:49.468 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:49.468 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:49.468 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:49.468 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:49.468 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:49.468 [510/743] Linking static target lib/librte_port.a 00:02:49.726 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:49.985 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:49.985 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.985 [514/743] Linking target lib/librte_port.so.23.0 00:02:49.985 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:49.985 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:50.243 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:50.243 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:50.243 [519/743] Linking static target lib/librte_pdump.a 00:02:50.243 [520/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:50.502 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.502 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:50.760 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:50.760 [524/743] Generating lib/rte_table_def with a custom command 00:02:50.760 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:50.760 [526/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.760 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:51.019 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:51.019 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:51.277 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:51.277 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:51.277 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:51.277 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:51.277 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:51.277 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:51.536 [536/743] Linking static target lib/librte_table.a 00:02:51.536 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:51.795 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:52.053 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.053 [540/743] Linking target lib/librte_table.so.23.0 00:02:52.053 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:52.053 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:52.053 [543/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:52.313 [544/743] Generating lib/rte_graph_def with a custom command 00:02:52.313 [545/743] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:52.313 [546/743] Generating lib/rte_graph_mingw with a custom command 00:02:52.571 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:52.571 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:52.571 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:52.830 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:52.830 [551/743] Linking static target lib/librte_graph.a 00:02:52.830 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:53.089 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:53.089 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:53.089 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:53.363 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:53.363 [557/743] Generating lib/rte_node_def with a custom command 00:02:53.363 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:53.676 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:53.676 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.676 [561/743] Linking target lib/librte_graph.so.23.0 00:02:53.676 [562/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:53.676 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.676 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:53.676 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:53.676 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.676 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:53.676 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.676 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:53.676 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.941 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:53.941 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:53.941 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.941 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:53.941 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:53.941 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:53.941 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:53.941 [578/743] Linking static target lib/librte_node.a 00:02:53.941 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.941 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.941 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.200 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.200 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.200 [584/743] Linking target lib/librte_node.so.23.0 00:02:54.200 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.200 [586/743] Linking static target drivers/librte_bus_vdev.a 
00:02:54.200 [587/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.459 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.459 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.459 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.459 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:54.717 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.717 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.717 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:54.717 [595/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:54.717 [596/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.717 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:54.976 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:54.976 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:54.976 [600/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.976 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:54.976 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.976 [603/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.235 [604/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:55.235 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.235 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.235 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:55.235 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.235 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:55.235 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:55.803 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:56.062 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:56.062 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:56.062 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:56.630 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.630 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.630 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.889 [618/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:57.147 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:57.147 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:57.406 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:57.406 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:57.406 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:57.406 
[624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:57.665 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:58.601 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:58.601 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.601 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.601 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.860 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.860 [631/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.860 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.860 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:58.860 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:59.428 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:59.428 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:59.687 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.687 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:59.687 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.946 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.946 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:59.946 [642/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.205 [643/743] Linking static target lib/librte_vhost.a 00:03:00.205 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:00.205 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.205 [646/743] Linking static target drivers/librte_net_i40e.a 00:03:00.205 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:00.205 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.464 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:00.724 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:00.724 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:00.724 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:00.724 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.983 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:00.983 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:00.983 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:01.242 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:01.242 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.242 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:01.809 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 
00:03:01.809 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.809 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.809 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.809 [664/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.809 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.809 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.068 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.068 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.068 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.326 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.586 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.586 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.845 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:03.104 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.362 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.621 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.621 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.621 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.880 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.880 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.880 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:04.139 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.139 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.398 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.398 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:04.657 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:04.657 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:04.657 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:04.657 [689/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.916 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:04.916 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:04.916 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:05.175 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:05.175 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:05.434 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:05.434 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 
00:03:05.693 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:05.693 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:05.952 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:05.952 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.211 [701/743] Linking static target lib/librte_pipeline.a 00:03:06.469 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:06.469 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:06.469 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:06.728 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:06.728 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:06.728 [707/743] Linking target app/dpdk-dumpcap 00:03:06.987 [708/743] Linking target app/dpdk-pdump 00:03:06.987 [709/743] Linking target app/dpdk-proc-info 00:03:06.987 [710/743] Linking target app/dpdk-test-acl 00:03:06.987 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:07.246 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.246 [713/743] Linking target app/dpdk-test-cmdline 00:03:07.246 [714/743] Linking target app/dpdk-test-bbdev 00:03:07.504 [715/743] Linking target app/dpdk-test-compress-perf 00:03:07.504 [716/743] Linking target app/dpdk-test-crypto-perf 00:03:07.504 [717/743] Linking target app/dpdk-test-eventdev 00:03:07.504 [718/743] Linking target app/dpdk-test-fib 00:03:07.504 [719/743] Linking target app/dpdk-test-flow-perf 00:03:07.762 [720/743] Linking target app/dpdk-test-gpudev 00:03:07.762 [721/743] Linking target app/dpdk-test-pipeline 00:03:08.021 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.284 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.284 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.544 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.544 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.544 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.544 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.803 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:09.061 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.062 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.320 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.320 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.320 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.320 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.587 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:09.845 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.845 [738/743] Linking target app/dpdk-test-sad 00:03:09.845 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:10.104 [740/743] Linking target app/dpdk-test-regex 00:03:10.104 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.363 [742/743] Linking target app/dpdk-testpmd 00:03:10.621 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:10.621 19:23:57 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.621 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.621 [0/1] Installing files. 00:03:10.881 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.881 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.143 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:11.144 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.145 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.145 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.146 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.146 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.146 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.406 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.406 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.407 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.407 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.407 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.407 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.407 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.407 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
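[editor's aside, not part of the console output] The headers staged above (rte_eal.h, rte_lcore.h and friends under /home/vagrant/spdk_repo/dpdk/build/include) are the public EAL API of this DPDK build. As a hedged sketch only, a minimal consumer of that API could look like the following; the file contents, program name and any build flags are illustrative assumptions, not something produced by this run:

/* minimal_eal.c - hypothetical example, assumes the include/lib layout shown in this log */
#include <stdio.h>
#include <rte_eal.h>     /* rte_eal_init(), rte_eal_cleanup() */
#include <rte_lcore.h>   /* rte_lcore_count() */

int main(int argc, char **argv)
{
    /* rte_eal_init() parses the EAL arguments (cores, hugepages, ...) and
     * returns the number of consumed arguments, or a negative value on error. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0)
        return 1;

    printf("EAL initialized, %u lcore(s) available\n", rte_lcore_count());

    /* Release hugepages and other EAL state before exiting. */
    rte_eal_cleanup();
    return 0;
}

The full compiler and linker flags for this staged tree would normally come from the libdpdk.pc file installed later in this log via pkg-config, rather than being written by hand.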
00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
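[editor's aside, not part of the console output] The entries just above install the ring, mempool, mbuf and net headers (rte_mempool.h, rte_mbuf.h, rte_ether.h, ...). As a hedged sketch under the same assumptions as before (names and sizing constants are made up for illustration), allocating a packet-mbuf pool against those headers could look like this:

/* mbuf_pool.c - hypothetical example using the mempool/mbuf headers installed above */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>   /* rte_socket_id() */
#include <rte_mbuf.h>    /* rte_pktmbuf_pool_create(), pulls in rte_mempool.h */

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;

    /* 8191 mbufs, 256-entry per-lcore cache, default data-room size. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("example_pool", 8191, 256, 0,
                                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                                       (int)rte_socket_id());
    if (pool == NULL) {
        rte_eal_cleanup();
        return EXIT_FAILURE;
    }

    rte_mempool_free(pool);
    rte_eal_cleanup();
    return EXIT_SUCCESS;
}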
00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.668 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.669 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.670 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.670 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:11.670 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.670 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:11.670 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.670 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:11.670 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.670 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:11.670 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.670 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:11.670 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.670 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:11.670 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.670 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:11.670 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.670 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:11.670 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.670 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:11.670 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.670 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:11.670 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.670 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:11.670 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.670 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:11.670 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.670 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:11.670 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.670 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:11.670 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.670 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:11.670 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.670 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:11.670 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.670 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:11.670 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.670 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:11.670 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.670 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:11.670 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.670 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:11.670 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.670 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:11.670 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.670 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:11.670 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.670 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:11.670 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.670 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:11.670 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.670 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:11.670 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.670 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:11.670 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:11.670 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:11.670 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:11.670 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:11.670 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:11.670 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:11.670 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:11.670 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:11.670 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:11.670 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:11.670 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:11.670 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:11.670 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.670 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:11.670 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.670 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:11.670 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.670 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:11.670 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.670 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:11.670 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.670 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:11.670 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.670 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:11.670 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.670 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:11.670 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.670 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
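The "Installing symlink pointing to ..." entries above and below follow DPDK's usual shared-library layout: one real object per library plus two versioned symlinks, with driver (PMD) libraries additionally exposed under dpdk/pmds-23.0 so the EAL can load them as plugins. A minimal bash sketch of that layout, using paths from this log; the library list and loop are illustrative only, not the actual Meson install step or buildtools/symlink-drivers-solibs.sh:

#!/usr/bin/env bash
# Illustrative sketch of the symlink chain the install log describes.
set -euo pipefail

libdir=/home/vagrant/spdk_repo/dpdk/build/lib   # destination used in the log
mkdir -p "$libdir"

# Each library ships one real file (librte_foo.so.23.0) plus two symlinks:
#   librte_foo.so.23 -> librte_foo.so.23.0   (runtime soname)
#   librte_foo.so    -> librte_foo.so.23     (development name used at link time)
for lib in librte_kvargs librte_telemetry librte_eal; do
    ln -sf "${lib}.so.23.0" "${libdir}/${lib}.so.23"
    ln -sf "${lib}.so.23"   "${libdir}/${lib}.so"
done

# Driver (PMD) libraries such as librte_bus_pci get the same layout mirrored
# under ${libdir}/dpdk/pmds-23.0, which is what the final
# symlink-drivers-solibs.sh step later in this log takes care of.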
00:03:11.670 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.670 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:11.670 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.670 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:11.670 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.670 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:11.671 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.671 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:11.671 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.671 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:11.671 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.671 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:11.671 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.671 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:11.671 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.671 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:11.671 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.671 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:11.671 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.671 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:11.671 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.671 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:11.671 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.671 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:11.671 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.671 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:11.671 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.671 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:11.671 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.671 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:11.671 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.671 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:11.671 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.671 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:11.671 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.671 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:11.671 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.671 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:11.671 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:11.671 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:11.671 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:11.671 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:11.671 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:11.671 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:11.671 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:11.671 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:11.671 19:23:58 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:11.671 19:23:58 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.671 19:23:58 -- common/autobuild_common.sh@203 -- $ cat 00:03:11.671 ************************************ 00:03:11.671 END TEST build_native_dpdk 00:03:11.671 ************************************ 00:03:11.671 19:23:58 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.671 00:03:11.671 real 0m50.308s 00:03:11.671 user 5m49.164s 00:03:11.671 sys 1m1.576s 00:03:11.671 19:23:58 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:11.671 19:23:58 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.671 19:23:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.671 19:23:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.671 19:23:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.671 19:23:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.671 19:23:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.671 19:23:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.671 19:23:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.671 19:23:58 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:11.929 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:12.188 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.188 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:12.188 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.446 Using 'verbs' RDMA provider 00:03:27.889 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:40.090 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:40.090 go version go1.21.1 linux/amd64 00:03:40.090 Creating mk/config.mk...done. 00:03:40.090 Creating mk/cc.flags.mk...done. 00:03:40.090 Type 'make' to build. 00:03:40.090 19:24:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.090 19:24:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:40.090 19:24:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:40.090 19:24:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.090 ************************************ 00:03:40.090 START TEST make 00:03:40.090 ************************************ 00:03:40.090 19:24:26 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:40.090 make[1]: Nothing to be done for 'all'. 00:04:06.629 CC lib/ut/ut.o 00:04:06.629 CC lib/log/log.o 00:04:06.629 CC lib/log/log_flags.o 00:04:06.629 CC lib/ut_mock/mock.o 00:04:06.629 CC lib/log/log_deprecated.o 00:04:06.630 LIB libspdk_ut_mock.a 00:04:06.630 LIB libspdk_ut.a 00:04:06.630 LIB libspdk_log.a 00:04:06.630 SO libspdk_ut_mock.so.5.0 00:04:06.630 SO libspdk_ut.so.1.0 00:04:06.630 SO libspdk_log.so.6.1 00:04:06.630 SYMLINK libspdk_ut_mock.so 00:04:06.630 SYMLINK libspdk_ut.so 00:04:06.630 SYMLINK libspdk_log.so 00:04:06.630 CXX lib/trace_parser/trace.o 00:04:06.630 CC lib/util/base64.o 00:04:06.630 CC lib/util/bit_array.o 00:04:06.630 CC lib/dma/dma.o 00:04:06.630 CC lib/util/crc16.o 00:04:06.630 CC lib/util/cpuset.o 00:04:06.630 CC lib/util/crc32.o 00:04:06.630 CC lib/util/crc32c.o 00:04:06.630 CC lib/ioat/ioat.o 00:04:06.630 CC lib/vfio_user/host/vfio_user_pci.o 00:04:06.630 CC lib/util/crc32_ieee.o 00:04:06.630 CC lib/util/crc64.o 00:04:06.630 CC lib/vfio_user/host/vfio_user.o 00:04:06.630 CC lib/util/dif.o 00:04:06.630 LIB libspdk_dma.a 00:04:06.630 CC lib/util/fd.o 00:04:06.630 CC lib/util/file.o 00:04:06.630 SO libspdk_dma.so.3.0 00:04:06.630 SYMLINK libspdk_dma.so 00:04:06.630 LIB libspdk_ioat.a 00:04:06.630 CC lib/util/hexlify.o 00:04:06.630 CC lib/util/iov.o 00:04:06.630 CC lib/util/math.o 00:04:06.630 CC lib/util/pipe.o 00:04:06.630 SO libspdk_ioat.so.6.0 00:04:06.630 CC lib/util/strerror_tls.o 00:04:06.630 CC lib/util/string.o 00:04:06.630 SYMLINK libspdk_ioat.so 00:04:06.630 CC lib/util/uuid.o 00:04:06.630 LIB libspdk_vfio_user.a 00:04:06.630 SO libspdk_vfio_user.so.4.0 00:04:06.630 CC lib/util/fd_group.o 00:04:06.630 SYMLINK libspdk_vfio_user.so 00:04:06.630 CC lib/util/xor.o 00:04:06.630 CC lib/util/zipf.o 00:04:06.630 LIB libspdk_util.a 00:04:06.630 SO libspdk_util.so.8.0 00:04:06.630 LIB libspdk_trace_parser.a 00:04:06.630 SYMLINK libspdk_util.so 00:04:06.630 SO libspdk_trace_parser.so.4.0 00:04:06.630 SYMLINK 
libspdk_trace_parser.so 00:04:06.630 CC lib/rdma/common.o 00:04:06.630 CC lib/rdma/rdma_verbs.o 00:04:06.630 CC lib/idxd/idxd.o 00:04:06.630 CC lib/idxd/idxd_user.o 00:04:06.630 CC lib/conf/conf.o 00:04:06.630 CC lib/idxd/idxd_kernel.o 00:04:06.630 CC lib/json/json_parse.o 00:04:06.630 CC lib/env_dpdk/env.o 00:04:06.630 CC lib/json/json_util.o 00:04:06.630 CC lib/vmd/vmd.o 00:04:06.630 CC lib/vmd/led.o 00:04:06.630 CC lib/env_dpdk/memory.o 00:04:06.630 CC lib/env_dpdk/pci.o 00:04:06.630 CC lib/json/json_write.o 00:04:06.630 LIB libspdk_conf.a 00:04:06.630 CC lib/env_dpdk/init.o 00:04:06.630 SO libspdk_conf.so.5.0 00:04:06.630 LIB libspdk_rdma.a 00:04:06.630 CC lib/env_dpdk/threads.o 00:04:06.630 SO libspdk_rdma.so.5.0 00:04:06.630 SYMLINK libspdk_conf.so 00:04:06.630 CC lib/env_dpdk/pci_ioat.o 00:04:06.630 SYMLINK libspdk_rdma.so 00:04:06.630 CC lib/env_dpdk/pci_virtio.o 00:04:06.630 CC lib/env_dpdk/pci_vmd.o 00:04:06.630 CC lib/env_dpdk/pci_idxd.o 00:04:06.630 LIB libspdk_json.a 00:04:06.630 CC lib/env_dpdk/pci_event.o 00:04:06.630 LIB libspdk_idxd.a 00:04:06.630 CC lib/env_dpdk/sigbus_handler.o 00:04:06.630 SO libspdk_json.so.5.1 00:04:06.630 CC lib/env_dpdk/pci_dpdk.o 00:04:06.630 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.630 SO libspdk_idxd.so.11.0 00:04:06.630 SYMLINK libspdk_json.so 00:04:06.630 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.630 SYMLINK libspdk_idxd.so 00:04:06.630 LIB libspdk_vmd.a 00:04:06.630 SO libspdk_vmd.so.5.0 00:04:06.630 SYMLINK libspdk_vmd.so 00:04:06.630 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.630 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.630 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.630 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.630 LIB libspdk_jsonrpc.a 00:04:06.630 SO libspdk_jsonrpc.so.5.1 00:04:06.630 SYMLINK libspdk_jsonrpc.so 00:04:06.630 CC lib/rpc/rpc.o 00:04:06.630 LIB libspdk_env_dpdk.a 00:04:06.630 SO libspdk_env_dpdk.so.13.0 00:04:06.630 LIB libspdk_rpc.a 00:04:06.630 SO libspdk_rpc.so.5.0 00:04:06.630 SYMLINK libspdk_rpc.so 00:04:06.630 SYMLINK libspdk_env_dpdk.so 00:04:06.630 CC lib/trace/trace_flags.o 00:04:06.630 CC lib/trace/trace.o 00:04:06.630 CC lib/notify/notify.o 00:04:06.630 CC lib/trace/trace_rpc.o 00:04:06.630 CC lib/notify/notify_rpc.o 00:04:06.630 CC lib/sock/sock_rpc.o 00:04:06.630 CC lib/sock/sock.o 00:04:06.630 LIB libspdk_notify.a 00:04:06.630 SO libspdk_notify.so.5.0 00:04:06.630 LIB libspdk_trace.a 00:04:06.630 SO libspdk_trace.so.9.0 00:04:06.630 SYMLINK libspdk_notify.so 00:04:06.630 SYMLINK libspdk_trace.so 00:04:06.630 LIB libspdk_sock.a 00:04:06.630 SO libspdk_sock.so.8.0 00:04:06.630 SYMLINK libspdk_sock.so 00:04:06.630 CC lib/thread/thread.o 00:04:06.630 CC lib/thread/iobuf.o 00:04:06.630 CC lib/nvme/nvme_ctrlr.o 00:04:06.630 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.630 CC lib/nvme/nvme_fabric.o 00:04:06.630 CC lib/nvme/nvme_ns_cmd.o 00:04:06.630 CC lib/nvme/nvme_ns.o 00:04:06.630 CC lib/nvme/nvme_pcie_common.o 00:04:06.630 CC lib/nvme/nvme_pcie.o 00:04:06.630 CC lib/nvme/nvme_qpair.o 00:04:06.889 CC lib/nvme/nvme.o 00:04:07.148 CC lib/nvme/nvme_quirks.o 00:04:07.406 CC lib/nvme/nvme_transport.o 00:04:07.406 CC lib/nvme/nvme_discovery.o 00:04:07.406 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:07.406 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:07.406 CC lib/nvme/nvme_tcp.o 00:04:07.665 CC lib/nvme/nvme_opal.o 00:04:07.665 CC lib/nvme/nvme_io_msg.o 00:04:07.665 CC lib/nvme/nvme_poll_group.o 00:04:07.924 CC lib/nvme/nvme_zns.o 00:04:07.924 LIB libspdk_thread.a 00:04:07.924 CC lib/nvme/nvme_cuse.o 00:04:07.924 CC 
lib/nvme/nvme_vfio_user.o 00:04:08.183 SO libspdk_thread.so.9.0 00:04:08.183 CC lib/nvme/nvme_rdma.o 00:04:08.183 SYMLINK libspdk_thread.so 00:04:08.183 CC lib/accel/accel.o 00:04:08.183 CC lib/blob/blobstore.o 00:04:08.445 CC lib/init/json_config.o 00:04:08.445 CC lib/virtio/virtio.o 00:04:08.703 CC lib/init/subsystem.o 00:04:08.703 CC lib/blob/request.o 00:04:08.703 CC lib/blob/zeroes.o 00:04:08.703 CC lib/blob/blob_bs_dev.o 00:04:08.703 CC lib/virtio/virtio_vhost_user.o 00:04:08.703 CC lib/init/subsystem_rpc.o 00:04:08.703 CC lib/init/rpc.o 00:04:08.703 CC lib/accel/accel_rpc.o 00:04:08.962 CC lib/virtio/virtio_vfio_user.o 00:04:08.962 CC lib/accel/accel_sw.o 00:04:08.962 CC lib/virtio/virtio_pci.o 00:04:08.962 LIB libspdk_init.a 00:04:08.962 SO libspdk_init.so.4.0 00:04:08.962 SYMLINK libspdk_init.so 00:04:09.221 CC lib/event/reactor.o 00:04:09.221 CC lib/event/app.o 00:04:09.221 CC lib/event/log_rpc.o 00:04:09.221 CC lib/event/app_rpc.o 00:04:09.221 CC lib/event/scheduler_static.o 00:04:09.221 LIB libspdk_virtio.a 00:04:09.221 LIB libspdk_accel.a 00:04:09.221 SO libspdk_virtio.so.6.0 00:04:09.221 SO libspdk_accel.so.14.0 00:04:09.221 SYMLINK libspdk_virtio.so 00:04:09.221 SYMLINK libspdk_accel.so 00:04:09.480 LIB libspdk_nvme.a 00:04:09.480 CC lib/bdev/bdev.o 00:04:09.480 CC lib/bdev/bdev_rpc.o 00:04:09.480 CC lib/bdev/part.o 00:04:09.480 CC lib/bdev/scsi_nvme.o 00:04:09.480 CC lib/bdev/bdev_zone.o 00:04:09.480 LIB libspdk_event.a 00:04:09.738 SO libspdk_event.so.12.0 00:04:09.738 SO libspdk_nvme.so.12.0 00:04:09.738 SYMLINK libspdk_event.so 00:04:09.997 SYMLINK libspdk_nvme.so 00:04:10.934 LIB libspdk_blob.a 00:04:10.934 SO libspdk_blob.so.10.1 00:04:10.934 SYMLINK libspdk_blob.so 00:04:11.193 CC lib/blobfs/tree.o 00:04:11.193 CC lib/blobfs/blobfs.o 00:04:11.193 CC lib/lvol/lvol.o 00:04:11.762 LIB libspdk_bdev.a 00:04:12.021 LIB libspdk_blobfs.a 00:04:12.021 SO libspdk_bdev.so.14.0 00:04:12.021 SO libspdk_blobfs.so.9.0 00:04:12.021 SYMLINK libspdk_bdev.so 00:04:12.021 SYMLINK libspdk_blobfs.so 00:04:12.021 LIB libspdk_lvol.a 00:04:12.021 SO libspdk_lvol.so.9.1 00:04:12.279 CC lib/nvmf/ctrlr.o 00:04:12.279 CC lib/nvmf/ctrlr_bdev.o 00:04:12.280 CC lib/nvmf/ctrlr_discovery.o 00:04:12.280 CC lib/nvmf/subsystem.o 00:04:12.280 CC lib/nvmf/nvmf.o 00:04:12.280 CC lib/scsi/dev.o 00:04:12.280 CC lib/ublk/ublk.o 00:04:12.280 CC lib/nbd/nbd.o 00:04:12.280 CC lib/ftl/ftl_core.o 00:04:12.280 SYMLINK libspdk_lvol.so 00:04:12.280 CC lib/ftl/ftl_init.o 00:04:12.539 CC lib/nbd/nbd_rpc.o 00:04:12.539 CC lib/scsi/lun.o 00:04:12.539 CC lib/ftl/ftl_layout.o 00:04:12.539 CC lib/ftl/ftl_debug.o 00:04:12.539 LIB libspdk_nbd.a 00:04:12.539 CC lib/nvmf/nvmf_rpc.o 00:04:12.539 SO libspdk_nbd.so.6.0 00:04:12.798 SYMLINK libspdk_nbd.so 00:04:12.798 CC lib/nvmf/transport.o 00:04:12.798 CC lib/scsi/port.o 00:04:12.798 CC lib/ublk/ublk_rpc.o 00:04:12.798 CC lib/nvmf/tcp.o 00:04:12.798 CC lib/scsi/scsi.o 00:04:12.798 CC lib/ftl/ftl_io.o 00:04:12.798 CC lib/ftl/ftl_sb.o 00:04:12.798 LIB libspdk_ublk.a 00:04:13.057 SO libspdk_ublk.so.2.0 00:04:13.057 SYMLINK libspdk_ublk.so 00:04:13.057 CC lib/ftl/ftl_l2p.o 00:04:13.057 CC lib/scsi/scsi_bdev.o 00:04:13.057 CC lib/scsi/scsi_pr.o 00:04:13.057 CC lib/scsi/scsi_rpc.o 00:04:13.057 CC lib/scsi/task.o 00:04:13.317 CC lib/ftl/ftl_l2p_flat.o 00:04:13.317 CC lib/ftl/ftl_nv_cache.o 00:04:13.317 CC lib/nvmf/rdma.o 00:04:13.317 CC lib/ftl/ftl_band.o 00:04:13.317 CC lib/ftl/ftl_band_ops.o 00:04:13.317 CC lib/ftl/ftl_writer.o 00:04:13.317 CC lib/ftl/ftl_rq.o 00:04:13.317 CC 
lib/ftl/ftl_reloc.o 00:04:13.578 LIB libspdk_scsi.a 00:04:13.578 SO libspdk_scsi.so.8.0 00:04:13.578 CC lib/ftl/ftl_l2p_cache.o 00:04:13.578 CC lib/ftl/ftl_p2l.o 00:04:13.578 SYMLINK libspdk_scsi.so 00:04:13.578 CC lib/ftl/mngt/ftl_mngt.o 00:04:13.837 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:13.837 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:13.837 CC lib/iscsi/conn.o 00:04:13.837 CC lib/vhost/vhost.o 00:04:13.837 CC lib/vhost/vhost_rpc.o 00:04:13.837 CC lib/vhost/vhost_scsi.o 00:04:13.837 CC lib/vhost/vhost_blk.o 00:04:14.096 CC lib/iscsi/init_grp.o 00:04:14.096 CC lib/iscsi/iscsi.o 00:04:14.096 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:14.354 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:14.354 CC lib/iscsi/md5.o 00:04:14.354 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:14.354 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:14.354 CC lib/vhost/rte_vhost_user.o 00:04:14.354 CC lib/iscsi/param.o 00:04:14.613 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.613 CC lib/iscsi/portal_grp.o 00:04:14.613 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:14.613 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.613 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.873 CC lib/iscsi/tgt_node.o 00:04:14.873 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.873 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.873 CC lib/iscsi/iscsi_subsystem.o 00:04:14.873 CC lib/iscsi/iscsi_rpc.o 00:04:14.873 CC lib/iscsi/task.o 00:04:14.873 CC lib/ftl/utils/ftl_conf.o 00:04:14.873 CC lib/ftl/utils/ftl_md.o 00:04:15.131 CC lib/ftl/utils/ftl_mempool.o 00:04:15.131 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.131 LIB libspdk_nvmf.a 00:04:15.131 CC lib/ftl/utils/ftl_property.o 00:04:15.131 SO libspdk_nvmf.so.17.0 00:04:15.131 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.131 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.131 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.131 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.390 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.390 SYMLINK libspdk_nvmf.so 00:04:15.390 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.390 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.390 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.390 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.390 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.390 CC lib/ftl/base/ftl_base_dev.o 00:04:15.390 LIB libspdk_vhost.a 00:04:15.390 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.390 LIB libspdk_iscsi.a 00:04:15.390 CC lib/ftl/ftl_trace.o 00:04:15.648 SO libspdk_vhost.so.7.1 00:04:15.648 SO libspdk_iscsi.so.7.0 00:04:15.648 SYMLINK libspdk_vhost.so 00:04:15.648 SYMLINK libspdk_iscsi.so 00:04:15.648 LIB libspdk_ftl.a 00:04:15.907 SO libspdk_ftl.so.8.0 00:04:16.165 SYMLINK libspdk_ftl.so 00:04:16.424 CC module/env_dpdk/env_dpdk_rpc.o 00:04:16.424 CC module/accel/ioat/accel_ioat.o 00:04:16.424 CC module/accel/error/accel_error.o 00:04:16.424 CC module/sock/posix/posix.o 00:04:16.424 CC module/accel/iaa/accel_iaa.o 00:04:16.424 CC module/accel/dsa/accel_dsa.o 00:04:16.424 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:16.424 CC module/blob/bdev/blob_bdev.o 00:04:16.424 CC module/scheduler/gscheduler/gscheduler.o 00:04:16.424 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:16.683 LIB libspdk_env_dpdk_rpc.a 00:04:16.683 SO libspdk_env_dpdk_rpc.so.5.0 00:04:16.683 SYMLINK libspdk_env_dpdk_rpc.so 00:04:16.683 LIB libspdk_scheduler_dpdk_governor.a 00:04:16.683 CC module/accel/error/accel_error_rpc.o 00:04:16.683 CC module/accel/ioat/accel_ioat_rpc.o 00:04:16.683 CC module/accel/iaa/accel_iaa_rpc.o 00:04:16.683 LIB libspdk_scheduler_dynamic.a 00:04:16.683 LIB libspdk_scheduler_gscheduler.a 00:04:16.683 SO 
libspdk_scheduler_dpdk_governor.so.3.0 00:04:16.683 SO libspdk_scheduler_dynamic.so.3.0 00:04:16.683 SO libspdk_scheduler_gscheduler.so.3.0 00:04:16.683 CC module/accel/dsa/accel_dsa_rpc.o 00:04:16.683 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:16.683 LIB libspdk_blob_bdev.a 00:04:16.683 SYMLINK libspdk_scheduler_dynamic.so 00:04:16.683 SYMLINK libspdk_scheduler_gscheduler.so 00:04:16.683 SO libspdk_blob_bdev.so.10.1 00:04:16.683 LIB libspdk_accel_ioat.a 00:04:16.683 LIB libspdk_accel_error.a 00:04:16.683 LIB libspdk_accel_iaa.a 00:04:16.942 SYMLINK libspdk_blob_bdev.so 00:04:16.943 SO libspdk_accel_error.so.1.0 00:04:16.943 SO libspdk_accel_ioat.so.5.0 00:04:16.943 SO libspdk_accel_iaa.so.2.0 00:04:16.943 LIB libspdk_accel_dsa.a 00:04:16.943 SO libspdk_accel_dsa.so.4.0 00:04:16.943 SYMLINK libspdk_accel_iaa.so 00:04:16.943 SYMLINK libspdk_accel_error.so 00:04:16.943 SYMLINK libspdk_accel_ioat.so 00:04:16.943 SYMLINK libspdk_accel_dsa.so 00:04:16.943 CC module/bdev/delay/vbdev_delay.o 00:04:16.943 CC module/bdev/malloc/bdev_malloc.o 00:04:16.943 CC module/bdev/error/vbdev_error.o 00:04:16.943 CC module/bdev/gpt/gpt.o 00:04:16.943 CC module/bdev/null/bdev_null.o 00:04:16.943 CC module/bdev/lvol/vbdev_lvol.o 00:04:16.943 CC module/blobfs/bdev/blobfs_bdev.o 00:04:16.943 CC module/bdev/nvme/bdev_nvme.o 00:04:16.943 CC module/bdev/passthru/vbdev_passthru.o 00:04:17.202 LIB libspdk_sock_posix.a 00:04:17.202 CC module/bdev/gpt/vbdev_gpt.o 00:04:17.202 SO libspdk_sock_posix.so.5.0 00:04:17.202 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:17.202 CC module/bdev/null/bdev_null_rpc.o 00:04:17.202 CC module/bdev/error/vbdev_error_rpc.o 00:04:17.461 SYMLINK libspdk_sock_posix.so 00:04:17.461 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:17.461 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:17.461 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:17.461 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:17.461 LIB libspdk_blobfs_bdev.a 00:04:17.461 SO libspdk_blobfs_bdev.so.5.0 00:04:17.461 LIB libspdk_bdev_gpt.a 00:04:17.461 LIB libspdk_bdev_null.a 00:04:17.461 LIB libspdk_bdev_error.a 00:04:17.461 SO libspdk_bdev_gpt.so.5.0 00:04:17.461 SO libspdk_bdev_null.so.5.0 00:04:17.461 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:17.461 SO libspdk_bdev_error.so.5.0 00:04:17.461 SYMLINK libspdk_blobfs_bdev.so 00:04:17.461 LIB libspdk_bdev_malloc.a 00:04:17.461 LIB libspdk_bdev_delay.a 00:04:17.461 SYMLINK libspdk_bdev_null.so 00:04:17.461 CC module/bdev/nvme/nvme_rpc.o 00:04:17.461 SYMLINK libspdk_bdev_gpt.so 00:04:17.461 LIB libspdk_bdev_passthru.a 00:04:17.461 SO libspdk_bdev_malloc.so.5.0 00:04:17.461 SO libspdk_bdev_delay.so.5.0 00:04:17.461 SYMLINK libspdk_bdev_error.so 00:04:17.720 SO libspdk_bdev_passthru.so.5.0 00:04:17.720 SYMLINK libspdk_bdev_malloc.so 00:04:17.720 SYMLINK libspdk_bdev_delay.so 00:04:17.720 CC module/bdev/nvme/bdev_mdns_client.o 00:04:17.720 CC module/bdev/raid/bdev_raid.o 00:04:17.720 SYMLINK libspdk_bdev_passthru.so 00:04:17.720 CC module/bdev/split/vbdev_split.o 00:04:17.720 CC module/bdev/split/vbdev_split_rpc.o 00:04:17.720 LIB libspdk_bdev_lvol.a 00:04:17.720 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:17.720 SO libspdk_bdev_lvol.so.5.0 00:04:17.720 CC module/bdev/aio/bdev_aio.o 00:04:17.720 SYMLINK libspdk_bdev_lvol.so 00:04:17.720 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.720 CC module/bdev/nvme/vbdev_opal.o 00:04:17.979 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.979 LIB libspdk_bdev_split.a 00:04:17.979 SO libspdk_bdev_split.so.5.0 00:04:17.979 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:17.979 SYMLINK libspdk_bdev_split.so 00:04:17.979 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.979 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.979 CC module/bdev/ftl/bdev_ftl.o 00:04:17.979 CC module/bdev/iscsi/bdev_iscsi.o 00:04:17.979 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.979 LIB libspdk_bdev_aio.a 00:04:18.238 SO libspdk_bdev_aio.so.5.0 00:04:18.238 CC module/bdev/raid/raid0.o 00:04:18.238 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:18.238 SYMLINK libspdk_bdev_aio.so 00:04:18.238 CC module/bdev/raid/raid1.o 00:04:18.238 LIB libspdk_bdev_zone_block.a 00:04:18.238 SO libspdk_bdev_zone_block.so.5.0 00:04:18.238 CC module/bdev/raid/concat.o 00:04:18.238 SYMLINK libspdk_bdev_zone_block.so 00:04:18.238 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:18.238 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:18.238 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:18.497 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:18.497 LIB libspdk_bdev_iscsi.a 00:04:18.497 SO libspdk_bdev_iscsi.so.5.0 00:04:18.497 LIB libspdk_bdev_raid.a 00:04:18.497 LIB libspdk_bdev_ftl.a 00:04:18.497 SO libspdk_bdev_ftl.so.5.0 00:04:18.497 SYMLINK libspdk_bdev_iscsi.so 00:04:18.497 SO libspdk_bdev_raid.so.5.0 00:04:18.756 SYMLINK libspdk_bdev_ftl.so 00:04:18.756 SYMLINK libspdk_bdev_raid.so 00:04:18.756 LIB libspdk_bdev_virtio.a 00:04:18.756 SO libspdk_bdev_virtio.so.5.0 00:04:18.756 SYMLINK libspdk_bdev_virtio.so 00:04:19.014 LIB libspdk_bdev_nvme.a 00:04:19.273 SO libspdk_bdev_nvme.so.6.0 00:04:19.273 SYMLINK libspdk_bdev_nvme.so 00:04:19.532 CC module/event/subsystems/iobuf/iobuf.o 00:04:19.532 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:19.532 CC module/event/subsystems/scheduler/scheduler.o 00:04:19.532 CC module/event/subsystems/vmd/vmd.o 00:04:19.532 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:19.532 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:19.532 CC module/event/subsystems/sock/sock.o 00:04:19.817 LIB libspdk_event_scheduler.a 00:04:19.817 LIB libspdk_event_vhost_blk.a 00:04:19.817 LIB libspdk_event_iobuf.a 00:04:19.818 LIB libspdk_event_vmd.a 00:04:19.818 LIB libspdk_event_sock.a 00:04:19.818 SO libspdk_event_vhost_blk.so.2.0 00:04:19.818 SO libspdk_event_scheduler.so.3.0 00:04:19.818 SO libspdk_event_iobuf.so.2.0 00:04:19.818 SO libspdk_event_sock.so.4.0 00:04:19.818 SO libspdk_event_vmd.so.5.0 00:04:19.818 SYMLINK libspdk_event_vhost_blk.so 00:04:19.818 SYMLINK libspdk_event_scheduler.so 00:04:19.818 SYMLINK libspdk_event_iobuf.so 00:04:19.818 SYMLINK libspdk_event_sock.so 00:04:19.818 SYMLINK libspdk_event_vmd.so 00:04:20.121 CC module/event/subsystems/accel/accel.o 00:04:20.122 LIB libspdk_event_accel.a 00:04:20.122 SO libspdk_event_accel.so.5.0 00:04:20.122 SYMLINK libspdk_event_accel.so 00:04:20.391 CC module/event/subsystems/bdev/bdev.o 00:04:20.650 LIB libspdk_event_bdev.a 00:04:20.650 SO libspdk_event_bdev.so.5.0 00:04:20.650 SYMLINK libspdk_event_bdev.so 00:04:20.909 CC module/event/subsystems/scsi/scsi.o 00:04:20.909 CC module/event/subsystems/ublk/ublk.o 00:04:20.909 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:20.909 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:20.909 CC module/event/subsystems/nbd/nbd.o 00:04:20.909 LIB libspdk_event_ublk.a 00:04:20.909 LIB libspdk_event_nbd.a 00:04:20.909 LIB libspdk_event_scsi.a 00:04:20.909 SO libspdk_event_ublk.so.2.0 00:04:20.909 SO libspdk_event_nbd.so.5.0 00:04:20.909 SO libspdk_event_scsi.so.5.0 00:04:21.167 SYMLINK libspdk_event_nbd.so 00:04:21.167 SYMLINK libspdk_event_ublk.so 
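Alongside the libraries, the install step above also placed libdpdk.pc under build/lib/pkgconfig, and the SPDK configure output earlier reports "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs". A hedged sketch of how a consumer resolves compile and link flags from that file; the final compile command and source file are hypothetical:

# Point pkg-config at the just-installed DPDK build (path from the log).
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

pkg-config --modversion libdpdk            # prints the DPDK version the .pc file describes
CFLAGS="$(pkg-config --cflags libdpdk)"    # include paths, e.g. .../dpdk/build/include
LDFLAGS="$(pkg-config --libs libdpdk)"     # -L/-l flags for the librte_* libraries above

# A consumer would then build against the installed headers and libraries, e.g.:
# cc $CFLAGS -o dpdk_check dpdk_check.c $LDFLAGS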
00:04:21.167 LIB libspdk_event_nvmf.a 00:04:21.167 SYMLINK libspdk_event_scsi.so 00:04:21.167 SO libspdk_event_nvmf.so.5.0 00:04:21.167 SYMLINK libspdk_event_nvmf.so 00:04:21.168 CC module/event/subsystems/iscsi/iscsi.o 00:04:21.168 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:21.426 LIB libspdk_event_vhost_scsi.a 00:04:21.426 LIB libspdk_event_iscsi.a 00:04:21.426 SO libspdk_event_vhost_scsi.so.2.0 00:04:21.426 SO libspdk_event_iscsi.so.5.0 00:04:21.426 SYMLINK libspdk_event_vhost_scsi.so 00:04:21.685 SYMLINK libspdk_event_iscsi.so 00:04:21.685 SO libspdk.so.5.0 00:04:21.685 SYMLINK libspdk.so 00:04:21.944 CC app/trace_record/trace_record.o 00:04:21.944 CXX app/trace/trace.o 00:04:21.944 CC app/spdk_nvme_perf/perf.o 00:04:21.944 CC app/spdk_lspci/spdk_lspci.o 00:04:21.944 CC app/nvmf_tgt/nvmf_main.o 00:04:21.944 CC app/iscsi_tgt/iscsi_tgt.o 00:04:21.944 CC examples/accel/perf/accel_perf.o 00:04:21.944 CC app/spdk_tgt/spdk_tgt.o 00:04:21.944 CC test/app/bdev_svc/bdev_svc.o 00:04:21.944 CC test/accel/dif/dif.o 00:04:21.944 LINK spdk_lspci 00:04:22.203 LINK nvmf_tgt 00:04:22.203 LINK spdk_trace_record 00:04:22.203 LINK spdk_tgt 00:04:22.203 LINK iscsi_tgt 00:04:22.203 LINK bdev_svc 00:04:22.462 LINK spdk_trace 00:04:22.462 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:22.462 LINK dif 00:04:22.462 LINK accel_perf 00:04:22.462 CC test/app/histogram_perf/histogram_perf.o 00:04:22.462 CC app/spdk_nvme_discover/discovery_aer.o 00:04:22.462 CC app/spdk_nvme_identify/identify.o 00:04:22.463 CC test/app/jsoncat/jsoncat.o 00:04:22.463 CC app/spdk_top/spdk_top.o 00:04:22.722 LINK histogram_perf 00:04:22.722 LINK jsoncat 00:04:22.722 CC app/vhost/vhost.o 00:04:22.722 LINK spdk_nvme_discover 00:04:22.722 CC app/spdk_dd/spdk_dd.o 00:04:22.722 LINK spdk_nvme_perf 00:04:22.722 LINK nvme_fuzz 00:04:22.722 CC examples/bdev/hello_world/hello_bdev.o 00:04:22.981 LINK vhost 00:04:22.981 CC app/fio/nvme/fio_plugin.o 00:04:22.981 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:22.981 CC test/app/stub/stub.o 00:04:22.981 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:22.981 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.981 LINK hello_bdev 00:04:22.981 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:22.981 LINK stub 00:04:22.981 LINK spdk_dd 00:04:23.240 CC app/fio/bdev/fio_plugin.o 00:04:23.240 LINK spdk_nvme_identify 00:04:23.498 LINK spdk_top 00:04:23.498 CC test/bdev/bdevio/bdevio.o 00:04:23.498 CC test/blobfs/mkfs/mkfs.o 00:04:23.498 CC examples/blob/hello_world/hello_blob.o 00:04:23.498 LINK spdk_nvme 00:04:23.498 LINK vhost_fuzz 00:04:23.498 TEST_HEADER include/spdk/accel.h 00:04:23.498 TEST_HEADER include/spdk/accel_module.h 00:04:23.498 TEST_HEADER include/spdk/assert.h 00:04:23.498 TEST_HEADER include/spdk/barrier.h 00:04:23.498 TEST_HEADER include/spdk/base64.h 00:04:23.498 TEST_HEADER include/spdk/bdev.h 00:04:23.498 TEST_HEADER include/spdk/bdev_module.h 00:04:23.498 TEST_HEADER include/spdk/bdev_zone.h 00:04:23.498 TEST_HEADER include/spdk/bit_array.h 00:04:23.498 TEST_HEADER include/spdk/bit_pool.h 00:04:23.498 TEST_HEADER include/spdk/blob_bdev.h 00:04:23.498 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:23.498 TEST_HEADER include/spdk/blobfs.h 00:04:23.498 TEST_HEADER include/spdk/blob.h 00:04:23.498 TEST_HEADER include/spdk/conf.h 00:04:23.498 TEST_HEADER include/spdk/config.h 00:04:23.498 TEST_HEADER include/spdk/cpuset.h 00:04:23.498 TEST_HEADER include/spdk/crc16.h 00:04:23.498 TEST_HEADER include/spdk/crc32.h 00:04:23.498 TEST_HEADER include/spdk/crc64.h 00:04:23.498 TEST_HEADER 
include/spdk/dif.h 00:04:23.498 TEST_HEADER include/spdk/dma.h 00:04:23.498 TEST_HEADER include/spdk/endian.h 00:04:23.498 TEST_HEADER include/spdk/env_dpdk.h 00:04:23.498 TEST_HEADER include/spdk/env.h 00:04:23.498 TEST_HEADER include/spdk/event.h 00:04:23.498 TEST_HEADER include/spdk/fd_group.h 00:04:23.498 TEST_HEADER include/spdk/fd.h 00:04:23.498 TEST_HEADER include/spdk/file.h 00:04:23.498 TEST_HEADER include/spdk/ftl.h 00:04:23.498 TEST_HEADER include/spdk/gpt_spec.h 00:04:23.498 TEST_HEADER include/spdk/hexlify.h 00:04:23.498 TEST_HEADER include/spdk/histogram_data.h 00:04:23.498 TEST_HEADER include/spdk/idxd.h 00:04:23.498 TEST_HEADER include/spdk/idxd_spec.h 00:04:23.498 TEST_HEADER include/spdk/init.h 00:04:23.498 TEST_HEADER include/spdk/ioat.h 00:04:23.498 TEST_HEADER include/spdk/ioat_spec.h 00:04:23.498 TEST_HEADER include/spdk/iscsi_spec.h 00:04:23.498 TEST_HEADER include/spdk/json.h 00:04:23.498 TEST_HEADER include/spdk/jsonrpc.h 00:04:23.498 TEST_HEADER include/spdk/likely.h 00:04:23.498 TEST_HEADER include/spdk/log.h 00:04:23.498 TEST_HEADER include/spdk/lvol.h 00:04:23.498 TEST_HEADER include/spdk/memory.h 00:04:23.498 TEST_HEADER include/spdk/mmio.h 00:04:23.498 TEST_HEADER include/spdk/nbd.h 00:04:23.498 TEST_HEADER include/spdk/notify.h 00:04:23.498 TEST_HEADER include/spdk/nvme.h 00:04:23.498 LINK mkfs 00:04:23.498 TEST_HEADER include/spdk/nvme_intel.h 00:04:23.498 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:23.757 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:23.757 TEST_HEADER include/spdk/nvme_spec.h 00:04:23.757 TEST_HEADER include/spdk/nvme_zns.h 00:04:23.757 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:23.757 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:23.757 TEST_HEADER include/spdk/nvmf.h 00:04:23.757 TEST_HEADER include/spdk/nvmf_spec.h 00:04:23.757 TEST_HEADER include/spdk/nvmf_transport.h 00:04:23.757 TEST_HEADER include/spdk/opal.h 00:04:23.757 TEST_HEADER include/spdk/opal_spec.h 00:04:23.757 TEST_HEADER include/spdk/pci_ids.h 00:04:23.757 TEST_HEADER include/spdk/pipe.h 00:04:23.757 TEST_HEADER include/spdk/queue.h 00:04:23.757 TEST_HEADER include/spdk/reduce.h 00:04:23.757 TEST_HEADER include/spdk/rpc.h 00:04:23.757 TEST_HEADER include/spdk/scheduler.h 00:04:23.757 TEST_HEADER include/spdk/scsi.h 00:04:23.757 TEST_HEADER include/spdk/scsi_spec.h 00:04:23.757 TEST_HEADER include/spdk/sock.h 00:04:23.757 TEST_HEADER include/spdk/stdinc.h 00:04:23.757 TEST_HEADER include/spdk/string.h 00:04:23.757 TEST_HEADER include/spdk/thread.h 00:04:23.757 LINK hello_blob 00:04:23.757 TEST_HEADER include/spdk/trace.h 00:04:23.757 TEST_HEADER include/spdk/trace_parser.h 00:04:23.757 TEST_HEADER include/spdk/tree.h 00:04:23.757 CC test/dma/test_dma/test_dma.o 00:04:23.757 TEST_HEADER include/spdk/ublk.h 00:04:23.757 TEST_HEADER include/spdk/util.h 00:04:23.757 TEST_HEADER include/spdk/uuid.h 00:04:23.757 TEST_HEADER include/spdk/version.h 00:04:23.757 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:23.757 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:23.757 TEST_HEADER include/spdk/vhost.h 00:04:23.757 TEST_HEADER include/spdk/vmd.h 00:04:23.757 LINK spdk_bdev 00:04:23.757 TEST_HEADER include/spdk/xor.h 00:04:23.757 TEST_HEADER include/spdk/zipf.h 00:04:23.757 CXX test/cpp_headers/accel.o 00:04:23.757 CC test/env/mem_callbacks/mem_callbacks.o 00:04:23.757 CC test/event/event_perf/event_perf.o 00:04:23.757 LINK bdevio 00:04:23.757 CXX test/cpp_headers/accel_module.o 00:04:23.757 LINK bdevperf 00:04:23.757 CXX test/cpp_headers/assert.o 00:04:24.017 LINK event_perf 
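The TEST_HEADER and CXX test/cpp_headers/*.o entries in this stretch come from a check that each public SPDK header builds on its own. A generic, hypothetical sketch of that technique in bash, not SPDK's actual test driver (the log drives the same idea through a C++ compiler, hence CXX):

#!/usr/bin/env bash
# Hypothetical header self-containedness check: compile one tiny translation
# unit per public header and fail if any header does not build standalone.
set -euo pipefail

include_dir=include/spdk          # public headers, as listed by the TEST_HEADER lines
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

for hdr in "$include_dir"/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\nint main(void) { return 0; }\n' "$name" \
        > "$workdir/$name.c"
    cc -I include -c "$workdir/$name.c" -o "$workdir/$name.o"
done
echo "all public headers compile standalone"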
00:04:24.017 LINK mem_callbacks 00:04:24.017 CC examples/blob/cli/blobcli.o 00:04:24.017 CC test/event/reactor/reactor.o 00:04:24.017 CXX test/cpp_headers/barrier.o 00:04:24.017 CXX test/cpp_headers/base64.o 00:04:24.017 CXX test/cpp_headers/bdev.o 00:04:24.017 CXX test/cpp_headers/bdev_module.o 00:04:24.017 LINK test_dma 00:04:24.017 CC test/env/vtophys/vtophys.o 00:04:24.017 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:24.017 LINK reactor 00:04:24.276 CC test/event/reactor_perf/reactor_perf.o 00:04:24.276 CXX test/cpp_headers/bdev_zone.o 00:04:24.276 CXX test/cpp_headers/bit_array.o 00:04:24.276 CC test/event/app_repeat/app_repeat.o 00:04:24.276 LINK env_dpdk_post_init 00:04:24.276 LINK vtophys 00:04:24.276 CXX test/cpp_headers/bit_pool.o 00:04:24.276 CXX test/cpp_headers/blob_bdev.o 00:04:24.276 LINK reactor_perf 00:04:24.277 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.277 LINK app_repeat 00:04:24.535 LINK blobcli 00:04:24.535 CXX test/cpp_headers/blobfs.o 00:04:24.535 CC test/env/memory/memory_ut.o 00:04:24.535 CXX test/cpp_headers/blob.o 00:04:24.535 CC test/env/pci/pci_ut.o 00:04:24.535 CC test/event/scheduler/scheduler.o 00:04:24.535 LINK iscsi_fuzz 00:04:24.535 CXX test/cpp_headers/conf.o 00:04:24.535 CC examples/ioat/perf/perf.o 00:04:24.535 CC examples/ioat/verify/verify.o 00:04:24.794 CXX test/cpp_headers/config.o 00:04:24.794 CXX test/cpp_headers/cpuset.o 00:04:24.794 CC examples/nvme/hello_world/hello_world.o 00:04:24.794 CXX test/cpp_headers/crc16.o 00:04:24.794 LINK scheduler 00:04:24.794 CC test/lvol/esnap/esnap.o 00:04:24.794 CC test/nvme/aer/aer.o 00:04:24.794 LINK verify 00:04:24.794 LINK ioat_perf 00:04:24.794 LINK pci_ut 00:04:25.053 CXX test/cpp_headers/crc32.o 00:04:25.053 CXX test/cpp_headers/crc64.o 00:04:25.053 CXX test/cpp_headers/dif.o 00:04:25.053 LINK memory_ut 00:04:25.053 CXX test/cpp_headers/dma.o 00:04:25.053 LINK hello_world 00:04:25.053 CC examples/sock/hello_world/hello_sock.o 00:04:25.053 CXX test/cpp_headers/endian.o 00:04:25.053 LINK aer 00:04:25.053 CXX test/cpp_headers/env_dpdk.o 00:04:25.053 CXX test/cpp_headers/env.o 00:04:25.313 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.313 CC examples/nvme/reconnect/reconnect.o 00:04:25.313 CC test/rpc_client/rpc_client_test.o 00:04:25.313 CC examples/nvme/arbitration/arbitration.o 00:04:25.313 LINK hello_sock 00:04:25.313 CXX test/cpp_headers/event.o 00:04:25.313 CC test/nvme/reset/reset.o 00:04:25.313 CC test/nvme/sgl/sgl.o 00:04:25.313 CC examples/nvme/hotplug/hotplug.o 00:04:25.313 LINK rpc_client_test 00:04:25.572 CXX test/cpp_headers/fd_group.o 00:04:25.572 CXX test/cpp_headers/fd.o 00:04:25.572 CC examples/vmd/lsvmd/lsvmd.o 00:04:25.572 LINK reconnect 00:04:25.572 LINK arbitration 00:04:25.572 LINK reset 00:04:25.572 LINK hotplug 00:04:25.572 LINK nvme_manage 00:04:25.572 LINK sgl 00:04:25.572 LINK lsvmd 00:04:25.572 CXX test/cpp_headers/file.o 00:04:25.832 CXX test/cpp_headers/ftl.o 00:04:25.832 CC examples/nvme/abort/abort.o 00:04:25.832 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.832 CC test/nvme/e2edp/nvme_dp.o 00:04:25.832 CC test/nvme/overhead/overhead.o 00:04:25.832 CC examples/nvmf/nvmf/nvmf.o 00:04:25.832 CC test/nvme/err_injection/err_injection.o 00:04:25.832 CXX test/cpp_headers/gpt_spec.o 00:04:25.832 CC examples/vmd/led/led.o 00:04:25.832 CC test/nvme/startup/startup.o 00:04:26.091 LINK cmb_copy 00:04:26.091 LINK err_injection 00:04:26.091 LINK led 00:04:26.091 CXX test/cpp_headers/hexlify.o 00:04:26.091 LINK nvme_dp 00:04:26.091 LINK startup 00:04:26.091 LINK 
overhead 00:04:26.091 LINK nvmf 00:04:26.091 LINK abort 00:04:26.091 CXX test/cpp_headers/histogram_data.o 00:04:26.350 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.350 CXX test/cpp_headers/idxd.o 00:04:26.350 CC test/nvme/simple_copy/simple_copy.o 00:04:26.350 CC test/nvme/reserve/reserve.o 00:04:26.350 LINK pmr_persistence 00:04:26.350 CC examples/util/zipf/zipf.o 00:04:26.350 CC test/nvme/connect_stress/connect_stress.o 00:04:26.350 CC test/nvme/boot_partition/boot_partition.o 00:04:26.350 CXX test/cpp_headers/idxd_spec.o 00:04:26.350 CC test/nvme/compliance/nvme_compliance.o 00:04:26.350 CC examples/thread/thread/thread_ex.o 00:04:26.608 LINK reserve 00:04:26.608 LINK simple_copy 00:04:26.608 CXX test/cpp_headers/init.o 00:04:26.608 LINK zipf 00:04:26.608 LINK boot_partition 00:04:26.608 LINK connect_stress 00:04:26.608 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.608 CXX test/cpp_headers/ioat.o 00:04:26.608 CXX test/cpp_headers/ioat_spec.o 00:04:26.608 CXX test/cpp_headers/iscsi_spec.o 00:04:26.608 LINK thread 00:04:26.867 CXX test/cpp_headers/json.o 00:04:26.867 LINK nvme_compliance 00:04:26.867 CXX test/cpp_headers/jsonrpc.o 00:04:26.867 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.867 CXX test/cpp_headers/likely.o 00:04:26.867 LINK fused_ordering 00:04:26.867 CXX test/cpp_headers/log.o 00:04:26.867 CXX test/cpp_headers/lvol.o 00:04:26.867 CC test/nvme/cuse/cuse.o 00:04:26.867 CC test/nvme/fdp/fdp.o 00:04:26.867 LINK doorbell_aers 00:04:27.126 CC examples/idxd/perf/perf.o 00:04:27.126 CXX test/cpp_headers/memory.o 00:04:27.126 CXX test/cpp_headers/mmio.o 00:04:27.126 CXX test/cpp_headers/nbd.o 00:04:27.126 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:27.126 CXX test/cpp_headers/notify.o 00:04:27.126 CC test/thread/poller_perf/poller_perf.o 00:04:27.126 CXX test/cpp_headers/nvme.o 00:04:27.126 LINK fdp 00:04:27.126 CXX test/cpp_headers/nvme_intel.o 00:04:27.384 LINK interrupt_tgt 00:04:27.384 CXX test/cpp_headers/nvme_ocssd.o 00:04:27.384 LINK poller_perf 00:04:27.384 LINK idxd_perf 00:04:27.384 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:27.384 CXX test/cpp_headers/nvme_spec.o 00:04:27.384 CXX test/cpp_headers/nvme_zns.o 00:04:27.384 CXX test/cpp_headers/nvmf_cmd.o 00:04:27.384 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.646 CXX test/cpp_headers/nvmf.o 00:04:27.646 CXX test/cpp_headers/nvmf_spec.o 00:04:27.646 CXX test/cpp_headers/nvmf_transport.o 00:04:27.646 CXX test/cpp_headers/opal.o 00:04:27.646 CXX test/cpp_headers/opal_spec.o 00:04:27.646 CXX test/cpp_headers/pci_ids.o 00:04:27.646 CXX test/cpp_headers/pipe.o 00:04:27.646 CXX test/cpp_headers/queue.o 00:04:27.646 CXX test/cpp_headers/reduce.o 00:04:27.646 CXX test/cpp_headers/rpc.o 00:04:27.646 CXX test/cpp_headers/scheduler.o 00:04:27.906 CXX test/cpp_headers/scsi.o 00:04:27.906 CXX test/cpp_headers/scsi_spec.o 00:04:27.906 CXX test/cpp_headers/sock.o 00:04:27.906 CXX test/cpp_headers/stdinc.o 00:04:27.906 CXX test/cpp_headers/string.o 00:04:27.906 CXX test/cpp_headers/thread.o 00:04:27.906 LINK cuse 00:04:27.906 CXX test/cpp_headers/trace.o 00:04:27.906 CXX test/cpp_headers/trace_parser.o 00:04:28.165 CXX test/cpp_headers/tree.o 00:04:28.165 CXX test/cpp_headers/ublk.o 00:04:28.165 CXX test/cpp_headers/util.o 00:04:28.165 CXX test/cpp_headers/uuid.o 00:04:28.165 CXX test/cpp_headers/version.o 00:04:28.165 CXX test/cpp_headers/vfio_user_pci.o 00:04:28.165 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.165 CXX test/cpp_headers/vhost.o 00:04:28.165 CXX test/cpp_headers/vmd.o 00:04:28.165 CXX 
test/cpp_headers/zipf.o 00:04:28.165 CXX test/cpp_headers/xor.o 00:04:29.547 LINK esnap 00:04:32.886 00:04:32.886 real 0m53.679s 00:04:32.887 user 4m59.314s 00:04:32.887 sys 1m8.681s 00:04:32.887 19:25:19 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:32.887 19:25:19 -- common/autotest_common.sh@10 -- $ set +x 00:04:32.887 ************************************ 00:04:32.887 END TEST make 00:04:32.887 ************************************ 00:04:33.146 19:25:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.146 19:25:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.146 19:25:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.146 19:25:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.146 19:25:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.146 19:25:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.146 19:25:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.146 19:25:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.146 19:25:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.146 19:25:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.146 19:25:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:33.146 19:25:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.146 19:25:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.146 19:25:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.146 19:25:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.146 19:25:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.146 19:25:19 -- scripts/common.sh@344 -- # : 1 00:04:33.146 19:25:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.146 19:25:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.146 19:25:19 -- scripts/common.sh@364 -- # decimal 1 00:04:33.146 19:25:19 -- scripts/common.sh@352 -- # local d=1 00:04:33.146 19:25:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.146 19:25:19 -- scripts/common.sh@354 -- # echo 1 00:04:33.146 19:25:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.146 19:25:19 -- scripts/common.sh@365 -- # decimal 2 00:04:33.146 19:25:19 -- scripts/common.sh@352 -- # local d=2 00:04:33.146 19:25:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.146 19:25:19 -- scripts/common.sh@354 -- # echo 2 00:04:33.146 19:25:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.146 19:25:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.146 19:25:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.146 19:25:19 -- scripts/common.sh@367 -- # return 0 00:04:33.146 19:25:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.146 19:25:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.146 --rc genhtml_branch_coverage=1 00:04:33.146 --rc genhtml_function_coverage=1 00:04:33.146 --rc genhtml_legend=1 00:04:33.146 --rc geninfo_all_blocks=1 00:04:33.146 --rc geninfo_unexecuted_blocks=1 00:04:33.146 00:04:33.146 ' 00:04:33.146 19:25:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.146 --rc genhtml_branch_coverage=1 00:04:33.146 --rc genhtml_function_coverage=1 00:04:33.146 --rc genhtml_legend=1 00:04:33.146 --rc geninfo_all_blocks=1 00:04:33.146 --rc geninfo_unexecuted_blocks=1 00:04:33.146 00:04:33.146 ' 00:04:33.146 19:25:19 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.146 --rc genhtml_branch_coverage=1 00:04:33.146 --rc genhtml_function_coverage=1 00:04:33.146 --rc genhtml_legend=1 00:04:33.146 --rc geninfo_all_blocks=1 00:04:33.146 --rc geninfo_unexecuted_blocks=1 00:04:33.146 00:04:33.146 ' 00:04:33.146 19:25:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.146 --rc genhtml_branch_coverage=1 00:04:33.146 --rc genhtml_function_coverage=1 00:04:33.146 --rc genhtml_legend=1 00:04:33.146 --rc geninfo_all_blocks=1 00:04:33.146 --rc geninfo_unexecuted_blocks=1 00:04:33.146 00:04:33.146 ' 00:04:33.146 19:25:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.146 19:25:19 -- nvmf/common.sh@7 -- # uname -s 00:04:33.146 19:25:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.146 19:25:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.146 19:25:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.146 19:25:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.146 19:25:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.146 19:25:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.146 19:25:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.146 19:25:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.146 19:25:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.146 19:25:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.146 19:25:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:04:33.146 19:25:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:04:33.146 19:25:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.146 19:25:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.146 19:25:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:33.146 19:25:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.146 19:25:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.146 19:25:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.146 19:25:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.146 19:25:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.146 19:25:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.146 19:25:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.146 19:25:19 -- paths/export.sh@5 -- # export PATH 00:04:33.146 19:25:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.146 19:25:19 -- nvmf/common.sh@46 -- # : 0 00:04:33.146 19:25:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:33.146 19:25:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:33.146 19:25:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:33.146 19:25:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.146 19:25:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.146 19:25:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:33.146 19:25:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:33.146 19:25:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:33.146 19:25:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:33.146 19:25:19 -- spdk/autotest.sh@32 -- # uname -s 00:04:33.146 19:25:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:33.146 19:25:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:33.146 19:25:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.146 19:25:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:33.146 19:25:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.146 19:25:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.146 19:25:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.146 19:25:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.146 19:25:19 -- spdk/autotest.sh@48 -- # udevadm_pid=61517 00:04:33.146 19:25:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.146 19:25:19 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.146 19:25:19 -- spdk/autotest.sh@54 -- # echo 61520 00:04:33.146 19:25:19 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.146 19:25:19 -- spdk/autotest.sh@56 -- # echo 61521 00:04:33.146 19:25:19 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.146 19:25:19 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:33.146 19:25:19 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.146 19:25:19 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:33.146 19:25:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.146 19:25:19 -- common/autotest_common.sh@10 -- # set +x 00:04:33.146 19:25:19 -- spdk/autotest.sh@70 -- # create_test_list 00:04:33.146 19:25:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:33.146 19:25:19 -- common/autotest_common.sh@10 -- # set +x 00:04:33.146 19:25:20 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.146 19:25:20 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.146 19:25:20 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.146 19:25:20 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.146 19:25:20 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.146 19:25:20 -- spdk/autotest.sh@76 -- # 
freebsd_update_contigmem_mod 00:04:33.147 19:25:20 -- common/autotest_common.sh@1450 -- # uname 00:04:33.406 19:25:20 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:33.406 19:25:20 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:33.406 19:25:20 -- common/autotest_common.sh@1470 -- # uname 00:04:33.406 19:25:20 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:33.406 19:25:20 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:33.406 19:25:20 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:33.406 lcov: LCOV version 1.15 00:04:33.406 19:25:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:41.519 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:41.519 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:41.519 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:41.519 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:41.519 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:41.519 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:59.605 19:25:44 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:59.605 19:25:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.605 19:25:44 -- common/autotest_common.sh@10 -- # set +x 00:04:59.605 19:25:44 -- spdk/autotest.sh@89 -- # rm -f 00:04:59.605 19:25:44 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.605 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:59.605 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:59.605 19:25:45 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:59.605 19:25:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:59.605 19:25:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:59.605 19:25:45 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:59.605 19:25:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.605 19:25:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:59.605 19:25:45 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:59.605 19:25:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.605 19:25:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:59.605 19:25:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:59.605 19:25:45 -- common/autotest_common.sh@1659 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.605 19:25:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:59.605 19:25:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:59.605 19:25:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.605 19:25:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:59.605 19:25:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:59.605 19:25:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:59.605 19:25:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.605 19:25:45 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # grep -v p 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:59.605 19:25:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:59.605 19:25:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:59.605 19:25:45 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:59.605 19:25:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:59.605 No valid GPT data, bailing 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # pt= 00:04:59.605 19:25:45 -- scripts/common.sh@394 -- # return 1 00:04:59.605 19:25:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:59.605 1+0 records in 00:04:59.605 1+0 records out 00:04:59.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447813 s, 234 MB/s 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:59.605 19:25:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:59.605 19:25:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:59.605 19:25:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:59.605 19:25:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:59.605 No valid GPT data, bailing 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # pt= 00:04:59.605 19:25:45 -- scripts/common.sh@394 -- # return 1 00:04:59.605 19:25:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:59.605 1+0 records in 00:04:59.605 1+0 records out 00:04:59.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036851 s, 285 MB/s 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:59.605 19:25:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:59.605 19:25:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:59.605 19:25:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:59.605 19:25:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:59.605 No valid GPT data, bailing 00:04:59.605 19:25:45 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # pt= 00:04:59.605 19:25:45 -- scripts/common.sh@394 -- # return 1 00:04:59.605 19:25:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:59.605 1+0 records in 00:04:59.605 1+0 records out 00:04:59.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472004 s, 222 MB/s 00:04:59.605 19:25:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:59.605 19:25:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:59.605 19:25:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:59.605 19:25:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:59.605 19:25:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:59.605 No valid GPT data, bailing 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:59.605 19:25:45 -- scripts/common.sh@393 -- # pt= 00:04:59.605 19:25:45 -- scripts/common.sh@394 -- # return 1 00:04:59.605 19:25:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:59.605 1+0 records in 00:04:59.605 1+0 records out 00:04:59.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486503 s, 216 MB/s 00:04:59.605 19:25:45 -- spdk/autotest.sh@116 -- # sync 00:04:59.605 19:25:45 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:59.605 19:25:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:59.605 19:25:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.983 19:25:47 -- spdk/autotest.sh@122 -- # uname -s 00:05:00.983 19:25:47 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:00.983 19:25:47 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:00.983 19:25:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.983 19:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.983 19:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.983 ************************************ 00:05:00.983 START TEST setup.sh 00:05:00.983 ************************************ 00:05:00.983 19:25:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:00.983 * Looking for test storage... 
00:05:00.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.983 19:25:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.983 19:25:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.983 19:25:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.983 19:25:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.983 19:25:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.983 19:25:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.983 19:25:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.983 19:25:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.983 19:25:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.983 19:25:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.983 19:25:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.983 19:25:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.983 19:25:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.983 19:25:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.983 19:25:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.983 19:25:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.983 19:25:47 -- scripts/common.sh@344 -- # : 1 00:05:00.983 19:25:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.983 19:25:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.983 19:25:47 -- scripts/common.sh@364 -- # decimal 1 00:05:00.984 19:25:47 -- scripts/common.sh@352 -- # local d=1 00:05:00.984 19:25:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.984 19:25:47 -- scripts/common.sh@354 -- # echo 1 00:05:00.984 19:25:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.984 19:25:47 -- scripts/common.sh@365 -- # decimal 2 00:05:00.984 19:25:47 -- scripts/common.sh@352 -- # local d=2 00:05:00.984 19:25:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.984 19:25:47 -- scripts/common.sh@354 -- # echo 2 00:05:00.984 19:25:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.984 19:25:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.984 19:25:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.984 19:25:47 -- scripts/common.sh@367 -- # return 0 00:05:00.984 19:25:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- setup/test-setup.sh@10 -- # uname -s 00:05:00.984 19:25:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:00.984 19:25:47 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:00.984 19:25:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.984 19:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.984 ************************************ 00:05:00.984 START TEST acl 00:05:00.984 ************************************ 00:05:00.984 19:25:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:00.984 * Looking for test storage... 00:05:00.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.984 19:25:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.984 19:25:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.984 19:25:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.984 19:25:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.984 19:25:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.984 19:25:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.984 19:25:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.984 19:25:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.984 19:25:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.984 19:25:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.984 19:25:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.984 19:25:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.984 19:25:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.984 19:25:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.984 19:25:47 -- scripts/common.sh@344 -- # : 1 00:05:00.984 19:25:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.984 19:25:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.984 19:25:47 -- scripts/common.sh@364 -- # decimal 1 00:05:00.984 19:25:47 -- scripts/common.sh@352 -- # local d=1 00:05:00.984 19:25:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.984 19:25:47 -- scripts/common.sh@354 -- # echo 1 00:05:00.984 19:25:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.984 19:25:47 -- scripts/common.sh@365 -- # decimal 2 00:05:00.984 19:25:47 -- scripts/common.sh@352 -- # local d=2 00:05:00.984 19:25:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.984 19:25:47 -- scripts/common.sh@354 -- # echo 2 00:05:00.984 19:25:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.984 19:25:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.984 19:25:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.984 19:25:47 -- scripts/common.sh@367 -- # return 0 00:05:00.984 19:25:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.984 --rc genhtml_branch_coverage=1 00:05:00.984 --rc genhtml_function_coverage=1 00:05:00.984 --rc genhtml_legend=1 00:05:00.984 --rc geninfo_all_blocks=1 00:05:00.984 --rc geninfo_unexecuted_blocks=1 00:05:00.984 00:05:00.984 ' 00:05:00.984 19:25:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:00.984 19:25:47 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:00.984 19:25:47 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:00.984 19:25:47 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:00.984 19:25:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.984 19:25:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:00.984 19:25:47 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:00.984 19:25:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.984 19:25:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:00.984 19:25:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:00.984 19:25:47 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.984 19:25:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:00.984 19:25:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:00.984 19:25:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.984 19:25:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:00.984 19:25:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:00.984 19:25:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.984 19:25:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.984 19:25:47 -- setup/acl.sh@12 -- # devs=() 00:05:00.984 19:25:47 -- setup/acl.sh@12 -- # declare -a devs 00:05:00.984 19:25:47 -- setup/acl.sh@13 -- # drivers=() 00:05:00.984 19:25:47 -- setup/acl.sh@13 -- # declare -A drivers 00:05:00.984 19:25:47 -- setup/acl.sh@51 -- # setup reset 00:05:00.984 19:25:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.984 19:25:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.920 19:25:48 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:01.920 19:25:48 -- setup/acl.sh@16 -- # local dev driver 00:05:01.920 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.920 19:25:48 -- setup/acl.sh@15 -- # setup output status 00:05:01.920 19:25:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.920 19:25:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.920 Hugepages 00:05:01.920 node hugesize free / total 00:05:01.920 19:25:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:01.920 19:25:48 -- setup/acl.sh@19 -- # continue 00:05:01.920 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.920 00:05:01.920 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.920 19:25:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:01.920 19:25:48 -- setup/acl.sh@19 -- # continue 00:05:01.920 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.179 19:25:48 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:02.179 19:25:48 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:02.179 19:25:48 -- setup/acl.sh@20 -- # continue 00:05:02.179 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.179 19:25:48 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:02.179 19:25:48 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:02.179 19:25:48 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:02.179 19:25:48 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:02.179 19:25:48 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:02.179 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.179 19:25:48 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:02.179 19:25:48 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:02.179 19:25:48 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:02.179 19:25:48 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:02.179 19:25:48 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
00:05:02.179 19:25:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.179 19:25:48 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:02.179 19:25:48 -- setup/acl.sh@54 -- # run_test denied denied 00:05:02.179 19:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.179 19:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.179 19:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.179 ************************************ 00:05:02.179 START TEST denied 00:05:02.180 ************************************ 00:05:02.180 19:25:48 -- common/autotest_common.sh@1114 -- # denied 00:05:02.180 19:25:48 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:02.180 19:25:49 -- setup/acl.sh@38 -- # setup output config 00:05:02.180 19:25:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.180 19:25:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.180 19:25:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:03.116 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:03.116 19:25:49 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:03.116 19:25:49 -- setup/acl.sh@28 -- # local dev driver 00:05:03.116 19:25:49 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:03.116 19:25:49 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:03.116 19:25:49 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:03.116 19:25:49 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:03.116 19:25:49 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:03.116 19:25:49 -- setup/acl.sh@41 -- # setup reset 00:05:03.116 19:25:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.116 19:25:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.737 00:05:03.737 real 0m1.461s 00:05:03.737 user 0m0.581s 00:05:03.737 sys 0m0.844s 00:05:03.737 19:25:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.737 ************************************ 00:05:03.737 END TEST denied 00:05:03.737 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:05:03.737 ************************************ 00:05:03.737 19:25:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:03.737 19:25:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.737 19:25:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.737 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:05:03.737 ************************************ 00:05:03.738 START TEST allowed 00:05:03.738 ************************************ 00:05:03.738 19:25:50 -- common/autotest_common.sh@1114 -- # allowed 00:05:03.738 19:25:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.738 19:25:50 -- setup/acl.sh@45 -- # setup output config 00:05:03.738 19:25:50 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:03.738 19:25:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.738 19:25:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.675 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.675 19:25:51 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:04.675 19:25:51 -- setup/acl.sh@28 -- # local dev driver 00:05:04.675 19:25:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:04.675 19:25:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:04.675 19:25:51 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:05:04.675 19:25:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:04.675 19:25:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:04.675 19:25:51 -- setup/acl.sh@48 -- # setup reset 00:05:04.675 19:25:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.675 19:25:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.244 00:05:05.244 real 0m1.538s 00:05:05.244 user 0m0.677s 00:05:05.244 sys 0m0.872s 00:05:05.244 19:25:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.244 19:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.244 ************************************ 00:05:05.244 END TEST allowed 00:05:05.244 ************************************ 00:05:05.244 00:05:05.244 real 0m4.410s 00:05:05.244 user 0m1.884s 00:05:05.244 sys 0m2.522s 00:05:05.244 19:25:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.244 19:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.244 ************************************ 00:05:05.244 END TEST acl 00:05:05.244 ************************************ 00:05:05.244 19:25:52 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:05.244 19:25:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.244 19:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.244 19:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.244 ************************************ 00:05:05.244 START TEST hugepages 00:05:05.244 ************************************ 00:05:05.244 19:25:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:05.504 * Looking for test storage... 00:05:05.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.504 19:25:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.504 19:25:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.504 19:25:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.504 19:25:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.504 19:25:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.504 19:25:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.504 19:25:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.504 19:25:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.504 19:25:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.504 19:25:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.504 19:25:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.504 19:25:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.504 19:25:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.504 19:25:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.504 19:25:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.504 19:25:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.504 19:25:52 -- scripts/common.sh@344 -- # : 1 00:05:05.504 19:25:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.504 19:25:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.504 19:25:52 -- scripts/common.sh@364 -- # decimal 1 00:05:05.504 19:25:52 -- scripts/common.sh@352 -- # local d=1 00:05:05.504 19:25:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.504 19:25:52 -- scripts/common.sh@354 -- # echo 1 00:05:05.504 19:25:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.504 19:25:52 -- scripts/common.sh@365 -- # decimal 2 00:05:05.504 19:25:52 -- scripts/common.sh@352 -- # local d=2 00:05:05.504 19:25:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.504 19:25:52 -- scripts/common.sh@354 -- # echo 2 00:05:05.504 19:25:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.504 19:25:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.504 19:25:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.504 19:25:52 -- scripts/common.sh@367 -- # return 0 00:05:05.504 19:25:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.504 19:25:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.504 --rc genhtml_branch_coverage=1 00:05:05.504 --rc genhtml_function_coverage=1 00:05:05.504 --rc genhtml_legend=1 00:05:05.504 --rc geninfo_all_blocks=1 00:05:05.504 --rc geninfo_unexecuted_blocks=1 00:05:05.504 00:05:05.504 ' 00:05:05.504 19:25:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.504 --rc genhtml_branch_coverage=1 00:05:05.504 --rc genhtml_function_coverage=1 00:05:05.505 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 19:25:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.505 --rc genhtml_branch_coverage=1 00:05:05.505 --rc genhtml_function_coverage=1 00:05:05.505 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 19:25:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.505 --rc genhtml_branch_coverage=1 00:05:05.505 --rc genhtml_function_coverage=1 00:05:05.505 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 19:25:52 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:05.505 19:25:52 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:05.505 19:25:52 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:05.505 19:25:52 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:05.505 19:25:52 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:05.505 19:25:52 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:05.505 19:25:52 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:05.505 19:25:52 -- setup/common.sh@18 -- # local node= 00:05:05.505 19:25:52 -- setup/common.sh@19 -- # local var val 00:05:05.505 19:25:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.505 19:25:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.505 19:25:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.505 19:25:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.505 19:25:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.505 
19:25:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4688796 kB' 'MemAvailable: 7324032 kB' 'Buffers: 2684 kB' 'Cached: 2836932 kB' 'SwapCached: 0 kB' 'Active: 496676 kB' 'Inactive: 2459920 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459920 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 118660 kB' 'Mapped: 51060 kB' 'Shmem: 10512 kB' 'KReclaimable: 86156 kB' 'Slab: 187428 kB' 'SReclaimable: 86156 kB' 'SUnreclaim: 101272 kB' 'KernelStack: 6624 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- 
setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.505 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.505 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 
19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # continue 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.506 19:25:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.506 19:25:52 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:05.506 19:25:52 -- setup/common.sh@33 -- # echo 2048 00:05:05.506 19:25:52 -- setup/common.sh@33 -- # return 0 00:05:05.506 19:25:52 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:05.506 19:25:52 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:05.506 19:25:52 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:05.506 19:25:52 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:05.506 19:25:52 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:05.506 19:25:52 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:05.506 19:25:52 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:05.506 19:25:52 -- setup/hugepages.sh@207 -- # get_nodes 00:05:05.506 19:25:52 -- setup/hugepages.sh@27 -- # local node 00:05:05.506 19:25:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.506 19:25:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:05.506 19:25:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.506 19:25:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.506 19:25:52 -- setup/hugepages.sh@208 -- # clear_hp 00:05:05.506 19:25:52 -- setup/hugepages.sh@37 -- # local node hp 00:05:05.506 19:25:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:05.506 19:25:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.506 19:25:52 -- setup/hugepages.sh@41 -- # echo 0 00:05:05.506 19:25:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.506 19:25:52 -- setup/hugepages.sh@41 -- # echo 0 00:05:05.506 
19:25:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:05.506 19:25:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:05.506 19:25:52 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:05.507 19:25:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.507 19:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.507 19:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.507 ************************************ 00:05:05.507 START TEST default_setup 00:05:05.507 ************************************ 00:05:05.507 19:25:52 -- common/autotest_common.sh@1114 -- # default_setup 00:05:05.507 19:25:52 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:05.507 19:25:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.507 19:25:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:05.507 19:25:52 -- setup/hugepages.sh@51 -- # shift 00:05:05.507 19:25:52 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:05.507 19:25:52 -- setup/hugepages.sh@52 -- # local node_ids 00:05:05.507 19:25:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.507 19:25:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.507 19:25:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:05.507 19:25:52 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:05.507 19:25:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.507 19:25:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.507 19:25:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.507 19:25:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.507 19:25:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.507 19:25:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:05.507 19:25:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.507 19:25:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:05.507 19:25:52 -- setup/hugepages.sh@73 -- # return 0 00:05:05.507 19:25:52 -- setup/hugepages.sh@137 -- # setup output 00:05:05.507 19:25:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.507 19:25:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.446 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.446 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.446 19:25:53 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:06.446 19:25:53 -- setup/hugepages.sh@89 -- # local node 00:05:06.446 19:25:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.446 19:25:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.446 19:25:53 -- setup/hugepages.sh@92 -- # local surp 00:05:06.446 19:25:53 -- setup/hugepages.sh@93 -- # local resv 00:05:06.446 19:25:53 -- setup/hugepages.sh@94 -- # local anon 00:05:06.446 19:25:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.446 19:25:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.446 19:25:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.446 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:06.446 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:06.446 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.446 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.446 19:25:53 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:06.446 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.446 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.446 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.446 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.446 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6790436 kB' 'MemAvailable: 9425540 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 498184 kB' 'Inactive: 2459928 kB' 'Active(anon): 129000 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120076 kB' 'Mapped: 50932 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187192 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101312 kB' 'KernelStack: 6544 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:06.446 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.446 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.446 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.446 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.446 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.446 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- 
setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.447 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.447 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:06.448 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:06.448 19:25:53 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.448 19:25:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.448 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.448 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:06.448 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:06.448 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.448 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.448 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.448 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.448 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.448 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6790688 kB' 'MemAvailable: 9425792 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497964 kB' 'Inactive: 2459928 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 
'Inactive(file): 2459928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 50932 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187192 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101312 kB' 'KernelStack: 6544 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.448 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # 
continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.449 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.711 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.711 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.711 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.711 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.711 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.711 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:06.711 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:06.711 19:25:53 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.711 19:25:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.711 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.711 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:06.711 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:06.711 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.711 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.711 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.711 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.711 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.711 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.711 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6790948 kB' 'MemAvailable: 9426052 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497896 kB' 'Inactive: 2459928 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187192 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101312 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 
'DirectMap1G: 8388608 kB' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 
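For orientation while reading these repeated scans: the nr_hugepages=1024 target that default_setup requested earlier (get_test_nr_hugepages 2097152 0) is consistent with dividing the requested size by the default hugepage size found above, assuming both values are in kB; this derivation is inferred from the trace, not quoted from hugepages.sh:

    requested_kb=2097152                     # size argument recorded in the trace
    hugepage_kb=2048                         # Hugepagesize reported by /proc/meminfo
    echo $(( requested_kb / hugepage_kb ))   # 1024, matching nr_hugepages=1024 above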
00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.712 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.712 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 
19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.713 19:25:53 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.713 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:06.713 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:06.713 19:25:53 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.713 nr_hugepages=1024 00:05:06.713 19:25:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.713 resv_hugepages=0 00:05:06.713 19:25:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.713 surplus_hugepages=0 00:05:06.713 19:25:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.713 anon_hugepages=0 00:05:06.713 19:25:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.713 19:25:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.713 19:25:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.713 19:25:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.713 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.713 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:06.713 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:06.713 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.713 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.713 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.713 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.713 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.713 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.713 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6791396 kB' 'MemAvailable: 9426500 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497904 kB' 'Inactive: 2459928 kB' 'Active(anon): 128720 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187192 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101312 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 
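At this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0, echoed the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and the HugePages_Total scan now under way feeds the "(( 1024 == nr_hugepages + surp + resv ))" style checks. A hedged, standalone sketch of that bookkeeping (awk stands in here for the script's own scan loop):

    verify_sketch() {                         # illustrative; mirrors the arithmetic checks in the trace
        local nr_hugepages=1024 anon surp resv total
        anon=$(awk  '$1=="AnonHugePages:"   {print $2}' /proc/meminfo)   # 0 kB in this run
        surp=$(awk  '$1=="HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0
        resv=$(awk  '$1=="HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0
        total=$(awk '$1=="HugePages_Total:" {print $2}' /proc/meminfo)   # 1024
        (( total == nr_hugepages + surp + resv )) &&                     # surplus/reserved must reconcile
        (( total == nr_hugepages ))
    }
    verify_sketch && echo 'hugepage accounting consistent'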
00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 
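The mapfile/mem=(...) pair repeated at the top of every scan (setup/common.sh@28 and @29) strips an optional "Node <id> " prefix, so the same read loop can parse both /proc/meminfo and a per-node /sys/devices/system/node/nodeN/meminfo; that node-scoped form is used for node 0 a little further below. A small sketch of the prefix removal, assuming extglob is enabled and node0 exists as it does on this VM:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo   # lines read "Node 0 MemTotal: ..."
    mem=("${mem[@]#Node +([0-9]) }")                          # same substitution as setup/common.sh@29
    printf '%s\n' "${mem[@]:0:3}"                             # now plain "Key: value" records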
00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.714 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.714 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.715 19:25:53 -- setup/common.sh@33 -- # echo 1024 00:05:06.715 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:06.715 19:25:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.715 19:25:53 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.715 19:25:53 -- setup/hugepages.sh@27 -- # local node 00:05:06.715 19:25:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.715 19:25:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.715 19:25:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.715 19:25:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.715 19:25:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.715 19:25:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.715 19:25:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.715 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.715 19:25:53 -- setup/common.sh@18 -- # local node=0 00:05:06.715 19:25:53 -- 
setup/common.sh@19 -- # local var val 00:05:06.715 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.715 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.715 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.715 19:25:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.715 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.715 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6791396 kB' 'MemUsed: 5447716 kB' 'SwapCached: 0 kB' 'Active: 497944 kB' 'Inactive: 2459928 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2839604 kB' 'Mapped: 50808 kB' 'AnonPages: 119856 kB' 'Shmem: 10488 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85880 kB' 'Slab: 187192 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.715 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.715 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.716 19:25:53 -- setup/common.sh@32 -- 
# continue
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': '
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': '
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': '
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # continue
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # IFS=': '
00:05:06.716 19:25:53 -- setup/common.sh@31 -- # read -r var val _
00:05:06.716 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.716 19:25:53 -- setup/common.sh@33 -- # echo 0
00:05:06.716 19:25:53 -- setup/common.sh@33 -- # return 0
00:05:06.716 19:25:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:06.716 19:25:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:06.716 19:25:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:06.716 19:25:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:06.716 node0=1024 expecting 1024 19:25:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:06.716 19:25:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:06.716
00:05:06.716 real 0m1.051s
00:05:06.716 user 0m0.519s
00:05:06.716 sys 0m0.476s
00:05:06.716 19:25:53 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:06.716 19:25:53 -- common/autotest_common.sh@10 -- # set +x
00:05:06.716 ************************************
00:05:06.716 END TEST default_setup
00:05:06.716 ************************************
00:05:06.716 19:25:53 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:06.716 19:25:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:06.716 19:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:06.716 19:25:53 -- common/autotest_common.sh@10 -- # set +x
00:05:06.716 ************************************
00:05:06.716 START TEST per_node_1G_alloc
00:05:06.716 ************************************
00:05:06.716 19:25:53 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:06.716 19:25:53 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:06.716 19:25:53 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:06.716 19:25:53 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:06.716 19:25:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:06.716 19:25:53 -- setup/hugepages.sh@51 -- # shift
00:05:06.716 19:25:53 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:06.716 19:25:53 -- setup/hugepages.sh@52 -- # local node_ids
00:05:06.716 19:25:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:06.716 19:25:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:06.716 19:25:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:06.716 19:25:53 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:06.716 19:25:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:06.716 19:25:53 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=512 00:05:06.717 19:25:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:06.717 19:25:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.717 19:25:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.717 19:25:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:06.717 19:25:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:06.717 19:25:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:06.717 19:25:53 -- setup/hugepages.sh@73 -- # return 0 00:05:06.717 19:25:53 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:06.717 19:25:53 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:06.717 19:25:53 -- setup/hugepages.sh@146 -- # setup output 00:05:06.717 19:25:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.717 19:25:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.976 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.976 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.239 19:25:53 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:07.239 19:25:53 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:07.239 19:25:53 -- setup/hugepages.sh@89 -- # local node 00:05:07.239 19:25:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.239 19:25:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.239 19:25:53 -- setup/hugepages.sh@92 -- # local surp 00:05:07.239 19:25:53 -- setup/hugepages.sh@93 -- # local resv 00:05:07.239 19:25:53 -- setup/hugepages.sh@94 -- # local anon 00:05:07.239 19:25:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.239 19:25:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.239 19:25:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.239 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:07.239 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:07.239 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.239 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.239 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.239 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.239 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.239 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7848260 kB' 'MemAvailable: 10483368 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 498120 kB' 'Inactive: 2459932 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120072 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187228 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6568 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.239 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.239 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
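For context on the numbers above: get_test_nr_hugepages 1048576 0 requests 1048576 kB of hugepages on node 0, which at the default 2048 kB page size works out to 512 pages (512 * 2048 kB = 1048576 kB, the Hugetlb figure in the meminfo dump), and setup.sh is then invoked with NRHUGE=512 HUGENODE=0. A minimal sketch of an equivalent manual per-node reservation, using the stock kernel sysfs interface rather than quoting what setup.sh does internally (node and page count are taken from this run):

#!/usr/bin/env bash
# Reserve 512 x 2 MiB hugepages on NUMA node 0 (run as root); the sysfs
# path below is the standard kernel per-node hugepage interface.
node=0
pages=512
echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
# Cross-check the reservation the same way verify_nr_hugepages does,
# by reading the per-node meminfo file:
grep -E 'HugePages_(Total|Free|Surp)' "/sys/devices/system/node/node${node}/meminfo"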
00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # 
[[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.240 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.240 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:07.240 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:07.240 19:25:53 -- setup/hugepages.sh@97 -- # anon=0 00:05:07.240 19:25:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.240 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.240 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:07.240 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:07.240 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.240 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.240 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.240 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.240 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.240 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.240 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7848512 kB' 'MemAvailable: 10483620 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 2459932 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187224 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101344 kB' 'KernelStack: 6560 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- 
# continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.241 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.241 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
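The long runs of IFS=': ' / read -r var val _ / [[ ... ]] / continue above are set -x traces of get_meminfo in setup/common.sh scanning each meminfo field until it reaches the requested key; here the key is HugePages_Surp, and having matched it the function echoes 0 just below. A condensed sketch of that lookup under the same IFS=': ' convention (the function body is illustrative, not the script verbatim):

# get_meminfo <key> [node] -- print the value of <key> from /proc/meminfo,
# or from the per-node meminfo file when a node number is given.
get_meminfo() {
    local key=$1 node=$2 file=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it, then
    # split each line on ': ' and stop at the first matching key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}
# Example from this run: get_meminfo HugePages_Surp 0   -> 0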
00:05:07.242 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:07.242 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:07.242 19:25:53 -- setup/hugepages.sh@99 -- # surp=0 00:05:07.242 19:25:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.242 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.242 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:07.242 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:07.242 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.242 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.242 19:25:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.242 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.242 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.242 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7848512 kB' 'MemAvailable: 10483620 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497916 kB' 'Inactive: 2459932 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119840 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187224 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101344 kB' 'KernelStack: 6560 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 
19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.242 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.242 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 
19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': 
' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.243 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.243 19:25:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.244 19:25:53 -- setup/common.sh@33 -- # echo 0 00:05:07.244 19:25:53 -- setup/common.sh@33 -- # return 0 00:05:07.244 19:25:53 -- setup/hugepages.sh@100 -- # resv=0 00:05:07.244 nr_hugepages=512 00:05:07.244 19:25:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:07.244 resv_hugepages=0 00:05:07.244 19:25:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.244 surplus_hugepages=0 00:05:07.244 19:25:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.244 anon_hugepages=0 00:05:07.244 19:25:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.244 19:25:53 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:07.244 19:25:53 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:07.244 19:25:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.244 19:25:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.244 19:25:53 -- setup/common.sh@18 -- # local node= 00:05:07.244 19:25:53 -- setup/common.sh@19 -- # local var val 00:05:07.244 19:25:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.244 19:25:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.244 19:25:53 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:07.244 19:25:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.244 19:25:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.244 19:25:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7848512 kB' 'MemAvailable: 10483620 kB' 'Buffers: 2684 kB' 'Cached: 2836920 kB' 'SwapCached: 0 kB' 'Active: 497736 kB' 'Inactive: 2459932 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85880 kB' 'Slab: 187220 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101340 kB' 'KernelStack: 6576 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.244 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.244 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:53 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 
19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.245 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.245 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.245 19:25:54 -- setup/common.sh@33 -- # echo 512 00:05:07.245 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.245 19:25:54 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:07.245 19:25:54 -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.245 19:25:54 -- setup/hugepages.sh@27 -- # local node 00:05:07.245 19:25:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.245 19:25:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.245 19:25:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.245 19:25:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.245 19:25:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.245 19:25:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.245 19:25:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.245 19:25:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.245 19:25:54 -- setup/common.sh@18 -- # local node=0 00:05:07.245 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.245 19:25:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.245 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.245 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.245 19:25:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.246 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.246 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7848512 kB' 'MemUsed: 4390600 kB' 'SwapCached: 0 kB' 'Active: 497692 kB' 'Inactive: 2459932 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2839604 kB' 'Mapped: 50808 kB' 'AnonPages: 119612 kB' 'Shmem: 10488 kB' 'KernelStack: 6560 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85880 kB' 'Slab: 187220 kB' 'SReclaimable: 85880 kB' 'SUnreclaim: 101340 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 
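The wall of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" pairs above is bash xtrace of setup/common.sh's get_meminfo scanning one meminfo snapshot per call (here the per-node copy for node 0). A condensed, hypothetical sketch of that loop, with illustrative names rather than the verbatim SPDK helper, looks like this:

#!/usr/bin/env bash
# Hypothetical condensed sketch of the loop traced above (names are illustrative,
# not the verbatim SPDK helper): pull one field out of /proc/meminfo or out of a
# per-node copy under /sys/devices/system/node/nodeN/meminfo.
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # node files prefix every key with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

# get_meminfo_sketch HugePages_Surp 0  ->  0, matching the "echo 0 / return 0" in the trace.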
00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.246 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.246 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.247 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.247 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.247 19:25:54 -- setup/common.sh@33 -- # echo 0 00:05:07.247 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.247 19:25:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.247 19:25:54 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.247 19:25:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.247 19:25:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.247 node0=512 expecting 512 00:05:07.247 19:25:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:07.247 19:25:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:07.247 00:05:07.247 real 0m0.544s 00:05:07.247 user 0m0.272s 00:05:07.247 sys 0m0.304s 00:05:07.247 19:25:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.247 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:07.247 ************************************ 00:05:07.247 END TEST per_node_1G_alloc 00:05:07.247 ************************************ 00:05:07.247 19:25:54 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:07.247 19:25:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.247 19:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.247 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:07.247 ************************************ 00:05:07.247 START TEST even_2G_alloc 00:05:07.247 ************************************ 00:05:07.247 19:25:54 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:07.247 19:25:54 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:07.247 19:25:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.247 19:25:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.247 19:25:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.247 19:25:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.247 19:25:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.247 19:25:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.247 19:25:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.247 19:25:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.247 19:25:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.247 19:25:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:07.247 19:25:54 -- setup/hugepages.sh@83 -- # : 0 00:05:07.247 19:25:54 -- setup/hugepages.sh@84 -- # : 0 00:05:07.247 19:25:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.247 19:25:54 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:07.247 19:25:54 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:07.247 19:25:54 -- setup/hugepages.sh@153 -- # setup output 00:05:07.247 19:25:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.247 19:25:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.819 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.819 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.819 19:25:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:07.819 19:25:54 -- setup/hugepages.sh@89 -- # local node 00:05:07.819 19:25:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.819 19:25:54 -- 
setup/hugepages.sh@91 -- # local sorted_s 00:05:07.819 19:25:54 -- setup/hugepages.sh@92 -- # local surp 00:05:07.819 19:25:54 -- setup/hugepages.sh@93 -- # local resv 00:05:07.819 19:25:54 -- setup/hugepages.sh@94 -- # local anon 00:05:07.819 19:25:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.819 19:25:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.819 19:25:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.819 19:25:54 -- setup/common.sh@18 -- # local node= 00:05:07.819 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.819 19:25:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.819 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.819 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.819 19:25:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.819 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.819 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6796396 kB' 'MemAvailable: 9431504 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498148 kB' 'Inactive: 2459936 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120048 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187252 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101384 kB' 'KernelStack: 6568 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.819 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.819 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 
19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 
19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.820 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.820 19:25:54 -- setup/common.sh@33 -- # echo 0 00:05:07.820 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.820 19:25:54 -- setup/hugepages.sh@97 -- # anon=0 00:05:07.820 19:25:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.820 19:25:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.820 19:25:54 -- setup/common.sh@18 -- # local node= 00:05:07.820 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.820 19:25:54 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:07.820 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.820 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.820 19:25:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.820 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.820 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.820 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6796900 kB' 'MemAvailable: 9432008 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497980 kB' 'Inactive: 2459936 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6536 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 
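All of this AnonHugePages / HugePages_Surp scanning feeds a simple accounting check. A rough recomputation of what the even_2G_alloc pass is verifying on this runner, with assumed variable names rather than the SPDK hugepages.sh code: the pool in the dump above is 2097152 kB of 2048 kB pages, i.e. 1024 pages, and the traced comparison requires HugePages_Total to equal nr_hugepages plus surplus plus reserved pages (both 0 here).

#!/usr/bin/env bash
# Rough recomputation (assumed names, not the SPDK hugepages.sh itself) of the
# accounting check shown in the trace.
pool_kb=2097152                                               # Hugetlb pool seen in the dump above
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 kB on this runner
want=$((pool_kb / page_kb))                                   # 2097152 / 2048 = 1024 pages

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Mirrors the traced "(( total == nr_hugepages + surp + resv ))" comparison;
# with surp=0 and rsvd=0 it reduces to total == 1024.
if ((total == want + surp + rsvd)); then
    echo "HugePages_Total=$total covers nr_hugepages=$want (surp=$surp, rsvd=$rsvd)"
else
    echo "hugepage accounting mismatch: total=$total expected=$((want + surp + rsvd))" >&2
fi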
00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- 
setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.821 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.821 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.822 19:25:54 -- setup/common.sh@33 -- # echo 0 00:05:07.822 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.822 19:25:54 -- setup/hugepages.sh@99 -- # surp=0 00:05:07.822 19:25:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.822 19:25:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.822 19:25:54 -- setup/common.sh@18 -- # local node= 00:05:07.822 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.822 19:25:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.822 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.822 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.822 19:25:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.822 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.822 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6796900 kB' 'MemAvailable: 9432008 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497744 kB' 'Inactive: 2459936 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119648 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6572 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.822 19:25:54 
-- setup/common.sh@31 -- # IFS=': ' 00:05:07.822 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.822 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.823 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.823 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 
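Every "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pair in this stretch is one iteration of get_meminfo's scan loop: common.sh@31 reads the prepared lines with IFS=': ' into var and val, common.sh@32 skips fields that are not the requested counter, and the loop ends just below when HugePages_Rsvd matches and 0 is echoed back, giving resv=0 (the earlier, identical scan produced surp=0). Stripped of the per-node handling, the lookup amounts to something like the following; this is a sketch inferred from the trace, not the script's literal text:

  get=HugePages_Rsvd                        # the counter requested via get_meminfo
  while IFS=': ' read -r var val _; do      # "Key:     123 kB" -> var=Key, val=123
    [[ $var == "$get" ]] && { echo "$val"; break; }
  done < /proc/meminfo

set -x prints one [[ ]] test and one continue per skipped key, which is why a single lookup occupies so much of this log.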
00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.824 19:25:54 -- setup/common.sh@33 -- # echo 0 00:05:07.824 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.824 19:25:54 -- setup/hugepages.sh@100 -- # resv=0 00:05:07.824 nr_hugepages=1024 00:05:07.824 19:25:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:07.824 resv_hugepages=0 00:05:07.824 19:25:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.824 surplus_hugepages=0 00:05:07.824 19:25:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.824 anon_hugepages=0 00:05:07.824 19:25:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.824 19:25:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.824 19:25:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.824 19:25:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.824 19:25:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.824 19:25:54 -- setup/common.sh@18 -- # local node= 00:05:07.824 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.824 19:25:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.824 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.824 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.824 19:25:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.824 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.824 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6796900 kB' 'MemAvailable: 9432008 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498004 kB' 'Inactive: 2459936 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6572 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 
-- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.824 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.824 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 
00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.825 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.825 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.825 19:25:54 -- setup/common.sh@33 -- # echo 1024 00:05:07.825 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.825 19:25:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.825 19:25:54 -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.825 19:25:54 -- setup/hugepages.sh@27 -- # local node 
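The HugePages_Total lookup just above echoed 1024 and returned, so the assertions at hugepages.sh@107 and @110, (( 1024 == nr_hugepages + surp + resv )), both hold: every configured huge page is either in the requested pool (nr_hugepages=1024), reserved (resv=0), or surplus (surp=0), and 1024 == 1024 + 0 + 0. The same check as a short standalone snippet, using awk as a shortcut for the read loop the script itself uses:

  nr_hugepages=1024                                              # what the test configured
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this run
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

Once the global numbers agree, get_nodes (whose first statement, local node, appears just above) enumerates /sys/devices/system/node/node* and the same counters are checked per NUMA node.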
00:05:07.825 19:25:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.825 19:25:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.825 19:25:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.825 19:25:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.825 19:25:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.825 19:25:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.825 19:25:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.825 19:25:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.825 19:25:54 -- setup/common.sh@18 -- # local node=0 00:05:07.825 19:25:54 -- setup/common.sh@19 -- # local var val 00:05:07.826 19:25:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.826 19:25:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.826 19:25:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.826 19:25:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.826 19:25:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.826 19:25:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6797324 kB' 'MemUsed: 5441788 kB' 'SwapCached: 0 kB' 'Active: 497924 kB' 'Inactive: 2459936 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2839608 kB' 'Mapped: 50808 kB' 'AnonPages: 119820 kB' 'Shmem: 10488 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 
19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 
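The lookup in progress here was started with node=0 (hugepages.sh@117 called get_meminfo HugePages_Surp 0 above), so common.sh@23-24 switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo before parsing; that is why this dump carries node-local fields such as MemUsed and FilePages that the global file lacks. A small sketch of that source selection, under the same simplifications as the earlier snippets:

  node=0
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo    # per-NUMA-node counters
  fi
  grep HugePages "$mem_f"                                # node0 shows 1024 total / 1024 free here

With a single node (no_nodes=1 above), the node0 figures match the global pool, so the test prints "node0=1024 expecting 1024" just below and even_2G_alloc passes.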
00:05:07.826 19:25:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.826 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.826 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # continue 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.827 19:25:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.827 19:25:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.827 19:25:54 -- setup/common.sh@33 -- # echo 0 00:05:07.827 19:25:54 -- setup/common.sh@33 -- # return 0 00:05:07.827 19:25:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.827 19:25:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.827 19:25:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.827 node0=1024 expecting 1024 00:05:07.827 19:25:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.827 19:25:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.827 00:05:07.827 real 0m0.551s 00:05:07.827 user 0m0.264s 00:05:07.827 sys 0m0.323s 00:05:07.827 19:25:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.827 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:07.827 ************************************ 00:05:07.827 END TEST even_2G_alloc 00:05:07.827 ************************************ 00:05:07.827 19:25:54 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:07.827 19:25:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.827 19:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.827 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:07.827 ************************************ 00:05:07.827 START TEST odd_alloc 00:05:07.827 ************************************ 00:05:07.827 19:25:54 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:07.827 19:25:54 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:07.827 19:25:54 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:07.827 19:25:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 
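even_2G_alloc has just closed out above (node0=1024 expecting 1024, then the END TEST banner), and odd_alloc opens by calling get_test_nr_hugepages 2098176. The argument looks like a size in kB: 2098176 kB is 2049 MiB, i.e. 1024.5 of the 2048 kB pages reported in the meminfo dumps, and the trace below shows the helper settling on nr_hugepages=1025 and exporting HUGEMEM=2049; the next meminfo dump accordingly reports HugePages_Total: 1025 and Hugetlb: 2099200 kB. The exact rounding expression is not visible in this excerpt, but one arithmetic that reproduces the 1025 the log shows, with hypothetical variable names:

  size_kb=2098176       # requested huge memory in kB (2049 MiB)
  hugepgsz_kb=2048      # Hugepagesize from the meminfo dumps
  nr_hugepages=$(( (size_kb + hugepgsz_kb - 1) / hugepgsz_kb ))   # ceiling division -> 1025
  echo "nr_hugepages=$nr_hugepages"

The odd page count, 1025 rather than 1024, is what the test is named for.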
00:05:07.827 19:25:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:07.827 19:25:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.827 19:25:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.827 19:25:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.827 19:25:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:07.827 19:25:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.827 19:25:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.827 19:25:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.827 19:25:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:07.827 19:25:54 -- setup/hugepages.sh@83 -- # : 0 00:05:07.827 19:25:54 -- setup/hugepages.sh@84 -- # : 0 00:05:07.827 19:25:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.827 19:25:54 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:07.827 19:25:54 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:07.827 19:25:54 -- setup/hugepages.sh@160 -- # setup output 00:05:07.827 19:25:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.827 19:25:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.396 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.396 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.396 19:25:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:08.396 19:25:55 -- setup/hugepages.sh@89 -- # local node 00:05:08.396 19:25:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.396 19:25:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.396 19:25:55 -- setup/hugepages.sh@92 -- # local surp 00:05:08.396 19:25:55 -- setup/hugepages.sh@93 -- # local resv 00:05:08.396 19:25:55 -- setup/hugepages.sh@94 -- # local anon 00:05:08.396 19:25:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.396 19:25:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.396 19:25:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.396 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.396 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.396 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.396 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.396 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.396 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.396 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.396 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810196 kB' 'MemAvailable: 9445304 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498144 kB' 'Inactive: 2459936 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120120 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6600 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 
19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.396 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.396 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.397 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.397 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.397 19:25:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.397 19:25:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.397 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.397 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.397 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.397 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.397 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.397 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.397 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.397 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.397 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6809948 kB' 'MemAvailable: 9445056 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497900 kB' 'Inactive: 2459936 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6568 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.397 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.397 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 
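Each block of [[ <field> == <pattern> ]] / continue records above is the same lookup loop walking a cached copy of /proc/meminfo until it reaches the requested key, at which point it echoes the value and returns. A simplified, self-contained sketch of that lookup (not the actual setup/common.sh get_meminfo; per-node queries would point mem_f at /sys/devices/system/node/node<N>/meminfo instead, as later records in this trace do):

  # Hypothetical helper mirroring the field-matching loop in the trace.
  get_meminfo_sketch() {
      local get=$1 mem_f=/proc/meminfo var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every field except the requested one
          echo "$val"                        # value only; a trailing "kB" lands in $_
          return 0
      done < "$mem_f"
      return 1                               # requested key not present
  }

  get_meminfo_sketch HugePages_Surp          # prints 0 in this run
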
00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 
-- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.398 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.398 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.399 
19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.399 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.399 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.399 19:25:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.399 19:25:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.399 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.399 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.399 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.399 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.399 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.399 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.399 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.399 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.399 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6809948 kB' 'MemAvailable: 9445056 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498028 kB' 'Inactive: 2459936 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 50860 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6520 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 
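The HugePages_Surp and HugePages_Rsvd values being fetched in these records feed the accounting checks that appear a little further down, where the trace evaluates (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )). A small worked sketch of that consistency check using this run's values (illustrative only; variable names follow the trace, and the literal 1025 stands in for the system-wide total the script compares against):

  # This run: requested 1025 pages, no surplus, no reserved pages outstanding.
  nr_hugepages=1025   # requested odd allocation
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  if (( 1025 == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent"
  else
      echo "hugepage accounting mismatch" >&2
  fi
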
00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.399 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.399 19:25:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.400 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.400 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.400 19:25:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:08.400 nr_hugepages=1025 00:05:08.400 19:25:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:08.400 resv_hugepages=0 00:05:08.400 19:25:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.400 19:25:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.400 surplus_hugepages=0 00:05:08.400 anon_hugepages=0 00:05:08.400 19:25:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.400 19:25:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + 
resv )) 00:05:08.400 19:25:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:08.400 19:25:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.400 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.400 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.400 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.400 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.400 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.400 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.400 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.400 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.400 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6809196 kB' 'MemAvailable: 9444304 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497892 kB' 'Inactive: 2459936 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6528 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.400 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.400 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 
19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.401 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.401 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.402 19:25:55 -- setup/common.sh@33 -- # echo 1025 00:05:08.402 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.402 19:25:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:08.402 19:25:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.402 19:25:55 -- setup/hugepages.sh@27 -- # local node 00:05:08.402 19:25:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.402 19:25:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:08.402 19:25:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.402 19:25:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.402 19:25:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.402 19:25:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.402 19:25:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.402 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.402 19:25:55 -- setup/common.sh@18 -- # local node=0 00:05:08.402 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.402 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.402 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.402 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.402 19:25:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.402 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.402 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6809196 kB' 
'MemUsed: 5429916 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 2459936 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839608 kB' 'Mapped: 50808 kB' 'AnonPages: 119936 kB' 'Shmem: 10488 kB' 'KernelStack: 6576 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.402 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.402 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.403 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.403 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.403 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.403 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.403 19:25:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.403 19:25:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.403 19:25:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.403 19:25:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.403 node0=1025 expecting 1025 00:05:08.403 19:25:55 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:08.403 19:25:55 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:08.403 00:05:08.403 real 0m0.549s 00:05:08.403 user 0m0.259s 00:05:08.403 sys 0m0.325s 00:05:08.403 19:25:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.403 19:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.403 ************************************ 00:05:08.403 END TEST odd_alloc 00:05:08.403 ************************************ 00:05:08.403 19:25:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:08.403 19:25:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.403 19:25:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.403 19:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.661 ************************************ 00:05:08.661 START TEST custom_alloc 00:05:08.661 ************************************ 00:05:08.661 19:25:55 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:08.661 19:25:55 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:08.661 19:25:55 -- setup/hugepages.sh@169 -- # local node 00:05:08.661 19:25:55 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:08.661 19:25:55 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:08.661 19:25:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:08.661 19:25:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:08.661 19:25:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:08.661 19:25:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:08.661 19:25:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.661 19:25:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:08.661 19:25:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:08.661 19:25:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.661 19:25:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.661 19:25:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:08.661 19:25:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.661 19:25:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.661 19:25:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.661 19:25:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.661 19:25:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:08.661 19:25:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.661 19:25:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:08.662 19:25:55 -- setup/hugepages.sh@83 -- # : 0 00:05:08.662 19:25:55 -- setup/hugepages.sh@84 -- # : 0 00:05:08.662 19:25:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:05:08.662 19:25:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:08.662 19:25:55 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:08.662 19:25:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:08.662 19:25:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:08.662 19:25:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:08.662 19:25:55 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:08.662 19:25:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.662 19:25:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.662 19:25:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:08.662 19:25:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.662 19:25:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.662 19:25:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.662 19:25:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.662 19:25:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:08.662 19:25:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:08.662 19:25:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:08.662 19:25:55 -- setup/hugepages.sh@78 -- # return 0 00:05:08.662 19:25:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:08.662 19:25:55 -- setup/hugepages.sh@187 -- # setup output 00:05:08.662 19:25:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.662 19:25:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.924 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.924 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.924 19:25:55 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:08.924 19:25:55 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:08.924 19:25:55 -- setup/hugepages.sh@89 -- # local node 00:05:08.924 19:25:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.924 19:25:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.924 19:25:55 -- setup/hugepages.sh@92 -- # local surp 00:05:08.924 19:25:55 -- setup/hugepages.sh@93 -- # local resv 00:05:08.924 19:25:55 -- setup/hugepages.sh@94 -- # local anon 00:05:08.924 19:25:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.924 19:25:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.924 19:25:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.924 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.924 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.924 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.924 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.924 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.924 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.924 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.924 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.924 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7859060 kB' 'MemAvailable: 10494168 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498160 kB' 
'Inactive: 2459936 kB' 'Active(anon): 128976 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120148 kB' 'Mapped: 51232 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6600 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 
-- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.925 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.925 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.926 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.926 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.926 19:25:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.926 19:25:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.926 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.926 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.926 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.926 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.926 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.926 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.926 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.926 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.926 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7859060 kB' 'MemAvailable: 10494168 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498192 kB' 'Inactive: 2459936 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120144 kB' 'Mapped: 51232 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187212 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101344 kB' 'KernelStack: 6600 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 
00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.926 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.926 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 
-- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.927 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.927 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.927 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.927 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.927 19:25:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.927 19:25:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.927 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.927 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.928 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.928 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.928 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.928 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.928 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.928 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.928 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7859060 kB' 'MemAvailable: 10494168 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 2459936 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6576 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 
-- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.928 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.928 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 
19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.929 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.929 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:08.929 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:08.929 19:25:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:08.929 nr_hugepages=512 00:05:08.929 19:25:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:08.929 resv_hugepages=0 00:05:08.929 19:25:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.929 surplus_hugepages=0 00:05:08.929 19:25:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.929 anon_hugepages=0 00:05:08.929 19:25:55 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.929 19:25:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:08.929 19:25:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:08.929 19:25:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.929 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.929 19:25:55 -- setup/common.sh@18 -- # local node= 00:05:08.929 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:08.929 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.929 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.929 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.929 19:25:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.929 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.929 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.929 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7859060 kB' 'MemAvailable: 10494168 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497924 kB' 'Inactive: 2459936 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119840 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187212 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101344 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # 
[[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.930 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.930 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.191 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.191 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.192 19:25:55 -- setup/common.sh@33 -- # echo 512 00:05:09.192 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:09.192 19:25:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:09.192 19:25:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.192 19:25:55 -- setup/hugepages.sh@27 -- # local node 00:05:09.192 19:25:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.192 19:25:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.192 19:25:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.192 19:25:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.192 19:25:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.192 19:25:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.192 19:25:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.192 19:25:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.192 19:25:55 -- setup/common.sh@18 -- # local node=0 00:05:09.192 19:25:55 -- setup/common.sh@19 -- # local var val 00:05:09.192 19:25:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.192 19:25:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.192 19:25:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.192 19:25:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.192 19:25:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.192 19:25:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 
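For readers following the xtrace above: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" lines are the common.sh helper scanning meminfo one field at a time until it reaches the requested key, reading either /proc/meminfo or, when a node id is passed (node=0 here), /sys/devices/system/node/node0/meminfo. A minimal stand-alone sketch of that lookup follows; meminfo_field is a hypothetical name, not the test's own function, and the code is only an illustration of the traced logic:

meminfo_field() {
    local field=$1 node=${2:-}
    local file=/proc/meminfo line key val
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        # per-node meminfo rows carry a "Node <id> " prefix; strip it first
        [[ -n $node ]] && line=${line#"Node $node "}
        key=${line%%:*}
        val=${line#*:}
        if [[ $key == "$field" ]]; then
            set -- $val          # split "   512" or "  8048332 kB" into words
            echo "$1"            # print the numeric value only
            return 0
        fi
    done < "$file"
    return 1
}

# e.g. meminfo_field HugePages_Total     -> 512 in the custom_alloc trace above
#      meminfo_field HugePages_Surp 0    -> surplus pages reported by node 0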
00:05:09.192 19:25:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7859792 kB' 'MemUsed: 4379320 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 2459936 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839608 kB' 'Mapped: 50808 kB' 'AnonPages: 119736 kB' 'Shmem: 10488 kB' 'KernelStack: 6592 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85868 kB' 'Slab: 187208 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.192 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.192 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 
00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 
19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # continue 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.193 19:25:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.193 19:25:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.193 19:25:55 -- setup/common.sh@33 -- # echo 0 00:05:09.194 19:25:55 -- setup/common.sh@33 -- # return 0 00:05:09.194 19:25:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.194 19:25:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.194 19:25:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.194 19:25:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.194 node0=512 expecting 512 00:05:09.194 19:25:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:09.194 19:25:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:09.194 00:05:09.194 real 0m0.558s 00:05:09.194 user 0m0.264s 00:05:09.194 sys 0m0.330s 00:05:09.194 19:25:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.194 19:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.194 ************************************ 00:05:09.194 END TEST custom_alloc 00:05:09.194 ************************************ 00:05:09.194 19:25:55 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:09.194 19:25:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.194 19:25:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.194 19:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.194 ************************************ 00:05:09.194 START TEST no_shrink_alloc 00:05:09.194 ************************************ 00:05:09.194 19:25:55 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:09.194 19:25:55 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:09.194 19:25:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:09.194 19:25:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.194 19:25:55 -- setup/hugepages.sh@51 -- # shift 00:05:09.194 19:25:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:09.194 19:25:55 -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.194 19:25:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.194 19:25:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:09.194 19:25:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.194 19:25:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.194 19:25:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.194 19:25:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:09.194 19:25:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.194 19:25:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.194 19:25:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.194 19:25:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.194 19:25:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.194 19:25:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:09.194 19:25:55 -- setup/hugepages.sh@73 -- # return 0 00:05:09.194 19:25:55 -- setup/hugepages.sh@198 -- # setup output 00:05:09.194 19:25:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.194 19:25:55 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.454 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.454 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.454 19:25:56 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:09.454 19:25:56 -- setup/hugepages.sh@89 -- # local node 00:05:09.454 19:25:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.454 19:25:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.454 19:25:56 -- setup/hugepages.sh@92 -- # local surp 00:05:09.454 19:25:56 -- setup/hugepages.sh@93 -- # local resv 00:05:09.454 19:25:56 -- setup/hugepages.sh@94 -- # local anon 00:05:09.454 19:25:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.454 19:25:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.454 19:25:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.454 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:09.454 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:09.454 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.454 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.454 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.454 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.454 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.454 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.454 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.454 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6808572 kB' 'MemAvailable: 9443680 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498048 kB' 'Inactive: 2459936 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 50932 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6552 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 
19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- 
setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.455 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.455 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.456 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:09.456 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:09.456 19:25:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:09.456 19:25:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.456 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.456 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:09.456 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:09.456 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.456 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.456 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.456 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.456 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.456 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6808572 kB' 'MemAvailable: 9443680 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 497948 kB' 'Inactive: 2459936 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 50932 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187220 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101352 kB' 'KernelStack: 6536 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.456 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.456 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.844 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.844 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 
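A side note on the numbers traced earlier for the no_shrink_alloc setup (get_test_nr_hugepages 2097152 0): the requested size in kB appears to be divided by the runner's Hugepagesize to get the per-node page count, which is why node 0 ends up asked for 1024 pages. A rough sketch of that arithmetic, with hypothetical variable names:

size_kb=2097152                                                      # requested size from the trace
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # -> 1024 pages
nodes_requested=(0)                                                  # only node 0 was asked for
echo "node${nodes_requested[0]}: requesting $nr_hugepages hugepages"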
00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- 
setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 
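The trace above is the field-by-field scan done by get_meminfo: the snapshot is read line by line, each 'key: value' pair is split on IFS=': ', non-matching keys are skipped with continue, and when the requested key (here HugePages_Surp) matches, its value is echoed back. A minimal standalone sketch of that pattern follows; the function name is hypothetical and this is not the SPDK helper itself:

# Sketch: return the value of one /proc/meminfo field, assuming "Key: value kB" lines.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # matching key found, emit its numeric value
            return 0
        fi
    done < /proc/meminfo
    echo 0                   # key absent, report 0 like the trace does
}
meminfo_value HugePages_Surp   # prints 0 for the snapshot shown above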
00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.845 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.845 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.846 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:09.846 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:09.846 19:25:56 -- setup/hugepages.sh@99 -- # surp=0 00:05:09.846 19:25:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.846 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.846 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:09.846 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:09.846 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.846 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.846 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.846 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.846 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.846 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.846 
19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6808572 kB' 'MemAvailable: 9443680 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498020 kB' 'Inactive: 2459936 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6576 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 
19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.846 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.846 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 
19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.847 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.847 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.848 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:09.848 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:09.848 19:25:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:09.848 nr_hugepages=1024 00:05:09.848 19:25:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.848 resv_hugepages=0 00:05:09.848 19:25:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.848 surplus_hugepages=0 00:05:09.848 19:25:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.848 anon_hugepages=0 00:05:09.848 19:25:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.848 19:25:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.848 19:25:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.848 19:25:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.848 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.848 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:09.848 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:09.848 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.848 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.848 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.848 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.848 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.848 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6808572 kB' 'MemAvailable: 9443680 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 2459936 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85868 kB' 'Slab: 187216 kB' 
'SReclaimable: 85868 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6576 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 
19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.848 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.848 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- 
setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- 
setup/common.sh@32 -- # continue 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.849 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.849 19:25:56 -- setup/common.sh@33 -- # echo 1024 00:05:09.849 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:09.849 19:25:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.849 19:25:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.849 19:25:56 -- setup/hugepages.sh@27 -- # local node 00:05:09.849 19:25:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.849 19:25:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.849 19:25:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.849 19:25:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.849 19:25:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.849 19:25:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.849 19:25:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.849 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.849 19:25:56 -- setup/common.sh@18 -- # local node=0 00:05:09.849 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:09.849 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.849 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.849 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.849 19:25:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.849 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.849 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.849 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6809352 kB' 'MemUsed: 5429760 kB' 'SwapCached: 0 kB' 'Active: 496720 kB' 'Inactive: 2459936 kB' 'Active(anon): 127536 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839608 kB' 'Mapped: 50288 kB' 'AnonPages: 118616 kB' 'Shmem: 10488 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85868 kB' 'Slab: 187212 kB' 'SReclaimable: 85868 kB' 'SUnreclaim: 101344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 
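Note: the arithmetic being verified at this point is the global hugepage accounting. With anon=0, surp=0 and resv=0 read out earlier and HugePages_Total reported as 1024, the check (( 1024 == nr_hugepages + surp + resv )) reduces to 1024 == 1024 + 0 + 0 and passes, after which get_nodes switches the remaining reads from the global /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo so the same counters can be verified node by node.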
00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.850 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.850 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.851 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.851 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.851 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.851 19:25:56 -- setup/common.sh@32 -- # continue 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.851 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.851 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.851 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:09.851 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:09.851 19:25:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.851 19:25:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.851 19:25:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.851 19:25:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.851 node0=1024 expecting 1024 00:05:09.851 19:25:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.851 19:25:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.851 19:25:56 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:09.851 19:25:56 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:09.851 19:25:56 -- setup/hugepages.sh@202 -- # setup output 00:05:09.851 19:25:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.851 19:25:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
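With node0=1024 matching the 1024-page expectation, the test moves to the next scenario: CLEAR_HUGE=no and NRHUGE=512 are set and scripts/setup.sh is run again, so the existing 1024-page reservation is left in place rather than being shrunk to 512 (the INFO line below reports exactly that). The kernel interface such a request ultimately drives is the per-size sysfs counter; a hedged sketch of that generic interface, not a claim about setup.sh internals:

echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # ask the kernel for 512 x 2 MiB pages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages          # kernel reports the count actually reserved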
00:05:10.113 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.113 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.113 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:10.113 19:25:56 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:10.113 19:25:56 -- setup/hugepages.sh@89 -- # local node 00:05:10.113 19:25:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.113 19:25:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.113 19:25:56 -- setup/hugepages.sh@92 -- # local surp 00:05:10.113 19:25:56 -- setup/hugepages.sh@93 -- # local resv 00:05:10.113 19:25:56 -- setup/hugepages.sh@94 -- # local anon 00:05:10.113 19:25:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.113 19:25:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.113 19:25:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.113 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:10.113 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:10.113 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.113 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.113 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.113 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.113 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.113 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810052 kB' 'MemAvailable: 9445152 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 495780 kB' 'Inactive: 2459936 kB' 'Active(anon): 126596 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117756 kB' 'Mapped: 50100 kB' 'Shmem: 10488 kB' 'KReclaimable: 85856 kB' 'Slab: 186916 kB' 'SReclaimable: 85856 kB' 'SUnreclaim: 101060 kB' 'KernelStack: 6504 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 
-- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.113 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.113 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': 
' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.114 19:25:56 -- 
setup/common.sh@33 -- # echo 0 00:05:10.114 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:10.114 19:25:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:10.114 19:25:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.114 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.114 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:10.114 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:10.114 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.114 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.114 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.114 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.114 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.114 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810100 kB' 'MemAvailable: 9445200 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 495148 kB' 'Inactive: 2459936 kB' 'Active(anon): 125964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117052 kB' 'Mapped: 49960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85856 kB' 'Slab: 186920 kB' 'SReclaimable: 85856 kB' 'SUnreclaim: 101064 kB' 'KernelStack: 6448 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- 
setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.114 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.114 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 
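The repetitive field scan above (it continues below through the rest of /proc/meminfo) is setup/common.sh's get_meminfo helper walking the meminfo output key by key until it reaches the requested field, here HugePages_Surp. A minimal Bash sketch of that lookup, reconstructed from this trace rather than taken from the actual setup/common.sh, so names and details are approximate:

shopt -s extglob

get_meminfo() {                                # usage: get_meminfo <Field> [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file, as in the node0 query later in this log.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. AnonHugePages, HugePages_Surp
            echo "$val"                        # numeric value only; the "kB" suffix is dropped
            return 0
        fi
    done
    echo 0                                     # fallback in this sketch if the field is absent
}

With the values this run prints (anon=0, surp=0, resv=0, HugePages_Total=1024), the consistency check in verify_nr_hugepages, (( 1024 == nr_hugepages + surp + resv )), passes, which matches the INFO line above about the 1024 hugepages already allocated on node0.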
00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.115 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.115 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.116 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:10.116 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:10.116 19:25:56 -- setup/hugepages.sh@99 -- # surp=0 00:05:10.116 19:25:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.116 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.116 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:10.116 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:10.116 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.116 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.116 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.116 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.116 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.116 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810100 kB' 'MemAvailable: 9445200 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 495080 kB' 'Inactive: 2459936 kB' 'Active(anon): 125896 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116972 kB' 'Mapped: 49960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85856 kB' 'Slab: 186916 kB' 'SReclaimable: 85856 kB' 'SUnreclaim: 101060 kB' 'KernelStack: 6448 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # 
continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.116 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.116 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.117 19:25:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.117 19:25:56 -- setup/common.sh@33 -- # echo 0 00:05:10.117 19:25:56 -- setup/common.sh@33 -- # return 0 00:05:10.117 19:25:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:10.117 nr_hugepages=1024 00:05:10.117 19:25:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.117 resv_hugepages=0 00:05:10.117 19:25:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.117 surplus_hugepages=0 00:05:10.117 19:25:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.117 anon_hugepages=0 00:05:10.117 19:25:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.117 19:25:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.117 19:25:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.117 19:25:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.117 19:25:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.117 19:25:56 -- setup/common.sh@18 -- # local node= 00:05:10.117 19:25:56 -- setup/common.sh@19 -- # local var val 00:05:10.117 19:25:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.117 19:25:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.117 19:25:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.117 19:25:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.117 19:25:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.117 19:25:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.117 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810100 kB' 'MemAvailable: 9445200 kB' 'Buffers: 2684 kB' 'Cached: 2836924 kB' 'SwapCached: 0 kB' 'Active: 495104 kB' 'Inactive: 2459936 kB' 'Active(anon): 125920 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117056 kB' 'Mapped: 49960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85856 kB' 'Slab: 186916 kB' 'SReclaimable: 85856 kB' 
'SUnreclaim: 101060 kB' 'KernelStack: 6464 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.118 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.118 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:56 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.119 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.119 19:25:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.119 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.119 19:25:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.119 19:25:57 -- setup/common.sh@33 -- # echo 1024 00:05:10.119 19:25:57 -- setup/common.sh@33 -- # return 0 00:05:10.119 19:25:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.119 19:25:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.119 19:25:57 -- setup/hugepages.sh@27 -- # local node 00:05:10.119 19:25:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.119 19:25:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.119 19:25:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.119 19:25:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.119 19:25:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.119 19:25:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.379 19:25:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.379 19:25:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.379 19:25:57 -- setup/common.sh@18 -- # local node=0 00:05:10.379 19:25:57 -- setup/common.sh@19 -- # local var val 00:05:10.379 19:25:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.379 19:25:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.379 19:25:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.379 19:25:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.379 19:25:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.379 19:25:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6810100 kB' 'MemUsed: 5429012 kB' 'SwapCached: 0 kB' 'Active: 495136 kB' 'Inactive: 2459936 kB' 'Active(anon): 125952 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2459936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839608 kB' 'Mapped: 49960 kB' 'AnonPages: 117064 kB' 'Shmem: 10488 kB' 'KernelStack: 6464 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85856 kB' 'Slab: 186916 kB' 'SReclaimable: 85856 kB' 'SUnreclaim: 101060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # 
continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.379 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.379 19:25:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # continue 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.380 19:25:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.380 19:25:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.380 19:25:57 -- setup/common.sh@33 -- # echo 0 00:05:10.380 19:25:57 -- setup/common.sh@33 -- # return 0 00:05:10.380 19:25:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.380 19:25:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.380 19:25:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.380 19:25:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.380 node0=1024 expecting 1024 00:05:10.380 19:25:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.380 19:25:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.380 00:05:10.380 real 0m1.121s 00:05:10.380 user 0m0.515s 00:05:10.380 sys 0m0.634s 00:05:10.380 19:25:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.380 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 ************************************ 00:05:10.380 END TEST no_shrink_alloc 00:05:10.380 ************************************ 00:05:10.380 19:25:57 -- setup/hugepages.sh@217 -- # clear_hp 00:05:10.380 19:25:57 -- setup/hugepages.sh@37 -- # local node hp 00:05:10.380 19:25:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
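The block above is setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node0/meminfo key by key until it reaches HugePages_Surp and echoes the value. A minimal stand-alone sketch of that lookup (simplified, not the exact helper; the field and node arguments are just examples):

    # get_meminfo <field> [node] -- print the numeric value of a meminfo field,
    # preferring the per-NUMA-node file when a node index is given.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line val
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9]* }        # per-node lines carry a "Node <n> " prefix
            if [[ $line == "$get:"* ]]; then
                val=${line#*:}               # drop the "Field:" part
                val=${val//[!0-9]/}          # keep digits only (strips " kB")
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp 0    # prints 0 on the node traced above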
00:05:10.380 19:25:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.380 19:25:57 -- setup/hugepages.sh@41 -- # echo 0 00:05:10.380 19:25:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.380 19:25:57 -- setup/hugepages.sh@41 -- # echo 0 00:05:10.380 19:25:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:10.380 19:25:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:10.380 00:05:10.380 real 0m4.953s 00:05:10.380 user 0m2.344s 00:05:10.380 sys 0m2.700s 00:05:10.380 19:25:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.380 ************************************ 00:05:10.380 END TEST hugepages 00:05:10.380 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 ************************************ 00:05:10.380 19:25:57 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:10.380 19:25:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.380 19:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.380 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 ************************************ 00:05:10.380 START TEST driver 00:05:10.380 ************************************ 00:05:10.380 19:25:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:10.380 * Looking for test storage... 00:05:10.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.380 19:25:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.380 19:25:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.380 19:25:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.639 19:25:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.639 19:25:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.639 19:25:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.639 19:25:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.639 19:25:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.639 19:25:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.639 19:25:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.639 19:25:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.639 19:25:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.639 19:25:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.639 19:25:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.639 19:25:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.639 19:25:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.639 19:25:57 -- scripts/common.sh@344 -- # : 1 00:05:10.639 19:25:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.639 19:25:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.639 19:25:57 -- scripts/common.sh@364 -- # decimal 1 00:05:10.639 19:25:57 -- scripts/common.sh@352 -- # local d=1 00:05:10.639 19:25:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.639 19:25:57 -- scripts/common.sh@354 -- # echo 1 00:05:10.639 19:25:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.639 19:25:57 -- scripts/common.sh@365 -- # decimal 2 00:05:10.639 19:25:57 -- scripts/common.sh@352 -- # local d=2 00:05:10.639 19:25:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.639 19:25:57 -- scripts/common.sh@354 -- # echo 2 00:05:10.639 19:25:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.639 19:25:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.639 19:25:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.639 19:25:57 -- scripts/common.sh@367 -- # return 0 00:05:10.640 19:25:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.640 19:25:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.640 --rc genhtml_branch_coverage=1 00:05:10.640 --rc genhtml_function_coverage=1 00:05:10.640 --rc genhtml_legend=1 00:05:10.640 --rc geninfo_all_blocks=1 00:05:10.640 --rc geninfo_unexecuted_blocks=1 00:05:10.640 00:05:10.640 ' 00:05:10.640 19:25:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.640 --rc genhtml_branch_coverage=1 00:05:10.640 --rc genhtml_function_coverage=1 00:05:10.640 --rc genhtml_legend=1 00:05:10.640 --rc geninfo_all_blocks=1 00:05:10.640 --rc geninfo_unexecuted_blocks=1 00:05:10.640 00:05:10.640 ' 00:05:10.640 19:25:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.640 --rc genhtml_branch_coverage=1 00:05:10.640 --rc genhtml_function_coverage=1 00:05:10.640 --rc genhtml_legend=1 00:05:10.640 --rc geninfo_all_blocks=1 00:05:10.640 --rc geninfo_unexecuted_blocks=1 00:05:10.640 00:05:10.640 ' 00:05:10.640 19:25:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.640 --rc genhtml_branch_coverage=1 00:05:10.640 --rc genhtml_function_coverage=1 00:05:10.640 --rc genhtml_legend=1 00:05:10.640 --rc geninfo_all_blocks=1 00:05:10.640 --rc geninfo_unexecuted_blocks=1 00:05:10.640 00:05:10.640 ' 00:05:10.640 19:25:57 -- setup/driver.sh@68 -- # setup reset 00:05:10.640 19:25:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.640 19:25:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.208 19:25:57 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:11.208 19:25:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.208 19:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.208 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.208 ************************************ 00:05:11.208 START TEST guess_driver 00:05:11.208 ************************************ 00:05:11.208 19:25:57 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:11.208 19:25:57 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:11.208 19:25:57 -- setup/driver.sh@47 -- # local fail=0 00:05:11.208 19:25:57 -- setup/driver.sh@49 -- # pick_driver 00:05:11.208 19:25:57 -- setup/driver.sh@36 -- # vfio 
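The guess_driver test traced next tries VFIO first and only settles on uio_pci_generic when no IOMMU groups exist and unsafe no-IOMMU mode is off, resolving the module path with modprobe --show-depends. A rough approximation of that decision, not the exact setup/driver.sh code:

    # pick_driver: prefer vfio-pci when the IOMMU is usable, otherwise fall back
    # to uio_pci_generic if the module (and its uio dependency) can be resolved.
    shopt -s nullglob
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

    pick_driver    # on this VM: 0 IOMMU groups and unsafe mode off, so uio_pci_generic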
00:05:11.208 19:25:57 -- setup/driver.sh@21 -- # local iommu_grups 00:05:11.208 19:25:57 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:11.208 19:25:57 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:11.208 19:25:57 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:11.208 19:25:57 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:11.208 19:25:57 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:11.208 19:25:57 -- setup/driver.sh@32 -- # return 1 00:05:11.208 19:25:57 -- setup/driver.sh@38 -- # uio 00:05:11.208 19:25:57 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:11.208 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:11.208 19:25:57 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:11.208 Looking for driver=uio_pci_generic 00:05:11.208 19:25:57 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:11.208 19:25:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.208 19:25:57 -- setup/driver.sh@45 -- # setup output config 00:05:11.208 19:25:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.208 19:25:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.775 19:25:58 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:11.775 19:25:58 -- setup/driver.sh@58 -- # continue 00:05:11.775 19:25:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.775 19:25:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:11.776 19:25:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:11.776 19:25:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.034 19:25:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.034 19:25:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.034 19:25:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.034 19:25:58 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:12.034 19:25:58 -- setup/driver.sh@65 -- # setup reset 00:05:12.034 19:25:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.034 19:25:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.602 ************************************ 00:05:12.602 END TEST guess_driver 00:05:12.602 ************************************ 00:05:12.602 00:05:12.602 real 0m1.456s 00:05:12.602 user 0m0.565s 00:05:12.602 sys 0m0.890s 00:05:12.602 19:25:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.602 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.602 00:05:12.602 real 0m2.254s 00:05:12.602 user 0m0.883s 00:05:12.602 sys 0m1.437s 00:05:12.602 19:25:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.602 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.602 ************************************ 00:05:12.603 END TEST driver 00:05:12.603 
************************************ 00:05:12.603 19:25:59 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:12.603 19:25:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.603 19:25:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.603 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.603 ************************************ 00:05:12.603 START TEST devices 00:05:12.603 ************************************ 00:05:12.603 19:25:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:12.862 * Looking for test storage... 00:05:12.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.862 19:25:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.862 19:25:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.862 19:25:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.862 19:25:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.862 19:25:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.862 19:25:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.862 19:25:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.862 19:25:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.862 19:25:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.862 19:25:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.862 19:25:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.862 19:25:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.862 19:25:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.862 19:25:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.862 19:25:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.862 19:25:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.862 19:25:59 -- scripts/common.sh@344 -- # : 1 00:05:12.862 19:25:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.862 19:25:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.862 19:25:59 -- scripts/common.sh@364 -- # decimal 1 00:05:12.862 19:25:59 -- scripts/common.sh@352 -- # local d=1 00:05:12.862 19:25:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.862 19:25:59 -- scripts/common.sh@354 -- # echo 1 00:05:12.862 19:25:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.862 19:25:59 -- scripts/common.sh@365 -- # decimal 2 00:05:12.862 19:25:59 -- scripts/common.sh@352 -- # local d=2 00:05:12.862 19:25:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.862 19:25:59 -- scripts/common.sh@354 -- # echo 2 00:05:12.862 19:25:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.862 19:25:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.862 19:25:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.862 19:25:59 -- scripts/common.sh@367 -- # return 0 00:05:12.862 19:25:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.862 19:25:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.862 --rc genhtml_branch_coverage=1 00:05:12.862 --rc genhtml_function_coverage=1 00:05:12.862 --rc genhtml_legend=1 00:05:12.862 --rc geninfo_all_blocks=1 00:05:12.862 --rc geninfo_unexecuted_blocks=1 00:05:12.862 00:05:12.862 ' 00:05:12.862 19:25:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.862 --rc genhtml_branch_coverage=1 00:05:12.862 --rc genhtml_function_coverage=1 00:05:12.862 --rc genhtml_legend=1 00:05:12.862 --rc geninfo_all_blocks=1 00:05:12.862 --rc geninfo_unexecuted_blocks=1 00:05:12.862 00:05:12.862 ' 00:05:12.862 19:25:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.862 --rc genhtml_branch_coverage=1 00:05:12.862 --rc genhtml_function_coverage=1 00:05:12.862 --rc genhtml_legend=1 00:05:12.862 --rc geninfo_all_blocks=1 00:05:12.862 --rc geninfo_unexecuted_blocks=1 00:05:12.862 00:05:12.862 ' 00:05:12.862 19:25:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.862 --rc genhtml_branch_coverage=1 00:05:12.862 --rc genhtml_function_coverage=1 00:05:12.862 --rc genhtml_legend=1 00:05:12.862 --rc geninfo_all_blocks=1 00:05:12.862 --rc geninfo_unexecuted_blocks=1 00:05:12.862 00:05:12.862 ' 00:05:12.862 19:25:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:12.862 19:25:59 -- setup/devices.sh@192 -- # setup reset 00:05:12.862 19:25:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.862 19:25:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.799 19:26:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:13.799 19:26:00 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:13.799 19:26:00 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:13.799 19:26:00 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:13.799 19:26:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.799 19:26:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:13.799 19:26:00 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:13.799 19:26:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.799 19:26:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:13.799 19:26:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:13.799 19:26:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.799 19:26:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:13.799 19:26:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:13.799 19:26:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.799 19:26:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:13.799 19:26:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:13.799 19:26:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.799 19:26:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.799 19:26:00 -- setup/devices.sh@196 -- # blocks=() 00:05:13.799 19:26:00 -- setup/devices.sh@196 -- # declare -a blocks 00:05:13.799 19:26:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:13.799 19:26:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:13.799 19:26:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:13.799 19:26:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.799 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:13.799 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:13.799 19:26:00 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:13.799 19:26:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:13.799 19:26:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:13.799 19:26:00 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:13.799 19:26:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:13.799 No valid GPT data, bailing 00:05:13.799 19:26:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:13.799 19:26:00 -- scripts/common.sh@393 -- # pt= 00:05:13.799 19:26:00 -- scripts/common.sh@394 -- # return 1 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:13.800 19:26:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:13.800 19:26:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:13.800 19:26:00 -- setup/common.sh@80 -- # echo 5368709120 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:13.800 19:26:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.800 19:26:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:13.800 19:26:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.800 19:26:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.800 19:26:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
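The get_zoned_devs pass above simply reads each /sys/block/nvme*/queue/zoned attribute and records namespaces that report anything other than "none"; block_in_use then asks spdk-gpt.py / blkid whether a partition table is present. A small sketch of the zoned scan (the traced helper also declares a bdf variable for PCI-address bookkeeping, which is omitted here):

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(< "$nvme/queue/zoned") == none ]] || zoned_devs+=("${nvme##*/}")
    done
    echo "zoned devices: ${zoned_devs[*]}"    # empty list on this test VM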
00:05:13.800 19:26:00 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:13.800 19:26:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:13.800 No valid GPT data, bailing 00:05:13.800 19:26:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.800 19:26:00 -- scripts/common.sh@393 -- # pt= 00:05:13.800 19:26:00 -- scripts/common.sh@394 -- # return 1 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:13.800 19:26:00 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:13.800 19:26:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:13.800 19:26:00 -- setup/common.sh@80 -- # echo 4294967296 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:13.800 19:26:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.800 19:26:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.800 19:26:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.800 19:26:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.800 19:26:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:13.800 19:26:00 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:13.800 19:26:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:13.800 No valid GPT data, bailing 00:05:13.800 19:26:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:13.800 19:26:00 -- scripts/common.sh@393 -- # pt= 00:05:13.800 19:26:00 -- scripts/common.sh@394 -- # return 1 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:13.800 19:26:00 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:13.800 19:26:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:13.800 19:26:00 -- setup/common.sh@80 -- # echo 4294967296 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:13.800 19:26:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.800 19:26:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.800 19:26:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:13.800 19:26:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.800 19:26:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.800 19:26:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.800 19:26:00 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:13.800 19:26:00 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:13.800 19:26:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:14.058 No valid GPT data, bailing 00:05:14.059 19:26:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:14.059 19:26:00 -- scripts/common.sh@393 -- # pt= 00:05:14.059 19:26:00 -- scripts/common.sh@394 -- # return 1 00:05:14.059 19:26:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:14.059 19:26:00 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:14.059 19:26:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:14.059 19:26:00 -- setup/common.sh@80 -- # echo 4294967296 
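Each candidate namespace is then size-checked: sec_size_to_bytes turns the device into a byte count and the result must clear min_disk_size (3221225472, i.e. 3 GiB). A minimal version of that check, assuming the usual 512-byte-sector sysfs size file (the traced helper's internals are not shown in full here):

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as set in the trace

    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev/size ]] || return 1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))    # sysfs size is in 512-byte sectors
    }

    bytes=$(sec_size_to_bytes nvme0n1) &&
        (( bytes >= min_disk_size )) &&
        echo "nvme0n1 qualifies: $bytes bytes"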
00:05:14.059 19:26:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:14.059 19:26:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.059 19:26:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:14.059 19:26:00 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:14.059 19:26:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:14.059 19:26:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:14.059 19:26:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.059 19:26:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.059 19:26:00 -- common/autotest_common.sh@10 -- # set +x 00:05:14.059 ************************************ 00:05:14.059 START TEST nvme_mount 00:05:14.059 ************************************ 00:05:14.059 19:26:00 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:14.059 19:26:00 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:14.059 19:26:00 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:14.059 19:26:00 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.059 19:26:00 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.059 19:26:00 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:14.059 19:26:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.059 19:26:00 -- setup/common.sh@40 -- # local part_no=1 00:05:14.059 19:26:00 -- setup/common.sh@41 -- # local size=1073741824 00:05:14.059 19:26:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.059 19:26:00 -- setup/common.sh@44 -- # parts=() 00:05:14.059 19:26:00 -- setup/common.sh@44 -- # local parts 00:05:14.059 19:26:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.059 19:26:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.059 19:26:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.059 19:26:00 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.059 19:26:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.059 19:26:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:14.059 19:26:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.059 19:26:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:14.994 Creating new GPT entries in memory. 00:05:14.994 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.994 other utilities. 00:05:14.994 19:26:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.994 19:26:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.994 19:26:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:14.995 19:26:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.995 19:26:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:15.931 Creating new GPT entries in memory. 00:05:15.931 The operation has completed successfully. 
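Condensed, the nvme_mount sequence traced in this test (partitioning just above, formatting and mounting just below) is: wipe the GPT, create one small partition, format it ext4, and mount it under the repo's test directory. All of these commands appear in the trace; the real script additionally takes flock on the disk while running sgdisk and waits for the partition uevent via sync_dev_uevents.sh:

    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                 # destroy any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:264191       # one small partition, sectors 2048-264191
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"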
00:05:15.931 19:26:02 -- setup/common.sh@57 -- # (( part++ )) 00:05:15.931 19:26:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.931 19:26:02 -- setup/common.sh@62 -- # wait 65532 00:05:16.190 19:26:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.190 19:26:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:16.190 19:26:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.190 19:26:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:16.190 19:26:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:16.190 19:26:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.190 19:26:02 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.190 19:26:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:16.190 19:26:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:16.190 19:26:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.190 19:26:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.190 19:26:02 -- setup/devices.sh@53 -- # local found=0 00:05:16.190 19:26:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.190 19:26:02 -- setup/devices.sh@56 -- # : 00:05:16.190 19:26:02 -- setup/devices.sh@59 -- # local pci status 00:05:16.190 19:26:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.190 19:26:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:16.190 19:26:02 -- setup/devices.sh@47 -- # setup output config 00:05:16.190 19:26:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.190 19:26:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.190 19:26:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.190 19:26:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:16.190 19:26:03 -- setup/devices.sh@63 -- # found=1 00:05:16.190 19:26:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.191 19:26:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.191 19:26:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.758 19:26:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.758 19:26:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.758 19:26:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.758 19:26:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.758 19:26:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.758 19:26:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.758 19:26:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.758 19:26:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.758 19:26:03 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.758 19:26:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:16.758 19:26:03 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.758 19:26:03 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.758 19:26:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.758 19:26:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.758 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.758 19:26:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.758 19:26:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.018 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:17.018 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:17.018 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.019 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.019 19:26:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:17.019 19:26:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:17.019 19:26:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.019 19:26:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:17.019 19:26:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:17.019 19:26:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.019 19:26:03 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.019 19:26:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:17.019 19:26:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:17.019 19:26:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.019 19:26:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.019 19:26:03 -- setup/devices.sh@53 -- # local found=0 00:05:17.019 19:26:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.019 19:26:03 -- setup/devices.sh@56 -- # : 00:05:17.019 19:26:03 -- setup/devices.sh@59 -- # local pci status 00:05:17.019 19:26:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.019 19:26:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.019 19:26:03 -- setup/devices.sh@47 -- # setup output config 00:05:17.019 19:26:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.019 19:26:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.278 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.278 19:26:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:17.278 19:26:04 -- setup/devices.sh@63 -- # found=1 00:05:17.278 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.278 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.278 
19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.537 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.537 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.796 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.796 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.796 19:26:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.796 19:26:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:17.796 19:26:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.796 19:26:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.796 19:26:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.796 19:26:04 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.796 19:26:04 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:17.796 19:26:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:17.796 19:26:04 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:17.796 19:26:04 -- setup/devices.sh@50 -- # local mount_point= 00:05:17.796 19:26:04 -- setup/devices.sh@51 -- # local test_file= 00:05:17.796 19:26:04 -- setup/devices.sh@53 -- # local found=0 00:05:17.796 19:26:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:17.796 19:26:04 -- setup/devices.sh@59 -- # local pci status 00:05:17.796 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.796 19:26:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.796 19:26:04 -- setup/devices.sh@47 -- # setup output config 00:05:17.796 19:26:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.796 19:26:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.055 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.055 19:26:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:18.055 19:26:04 -- setup/devices.sh@63 -- # found=1 00:05:18.055 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.055 19:26:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.055 19:26:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.320 19:26:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.320 19:26:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.583 19:26:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.583 19:26:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.583 19:26:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.583 19:26:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:18.583 19:26:05 -- setup/devices.sh@68 -- # return 0 00:05:18.583 19:26:05 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:18.583 19:26:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.583 19:26:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.583 19:26:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.583 19:26:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.583 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:18.583 00:05:18.583 real 0m4.583s 00:05:18.583 user 0m1.023s 00:05:18.583 sys 0m1.227s 00:05:18.583 19:26:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.583 19:26:05 -- common/autotest_common.sh@10 -- # set +x 00:05:18.583 ************************************ 00:05:18.583 END TEST nvme_mount 00:05:18.583 ************************************ 00:05:18.583 19:26:05 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:18.583 19:26:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.583 19:26:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.583 19:26:05 -- common/autotest_common.sh@10 -- # set +x 00:05:18.583 ************************************ 00:05:18.583 START TEST dm_mount 00:05:18.583 ************************************ 00:05:18.583 19:26:05 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:18.583 19:26:05 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:18.583 19:26:05 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:18.583 19:26:05 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:18.583 19:26:05 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:18.583 19:26:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.583 19:26:05 -- setup/common.sh@40 -- # local part_no=2 00:05:18.583 19:26:05 -- setup/common.sh@41 -- # local size=1073741824 00:05:18.583 19:26:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.583 19:26:05 -- setup/common.sh@44 -- # parts=() 00:05:18.583 19:26:05 -- setup/common.sh@44 -- # local parts 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.583 19:26:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part++ )) 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.583 19:26:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part++ )) 00:05:18.583 19:26:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.583 19:26:05 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:18.583 19:26:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.583 19:26:05 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:19.518 Creating new GPT entries in memory. 00:05:19.518 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.518 other utilities. 00:05:19.518 19:26:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.518 19:26:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.518 19:26:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.518 19:26:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.518 19:26:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:20.894 Creating new GPT entries in memory. 00:05:20.894 The operation has completed successfully. 00:05:20.894 19:26:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.894 19:26:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.894 19:26:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:20.894 19:26:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.894 19:26:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:21.830 The operation has completed successfully. 00:05:21.830 19:26:08 -- setup/common.sh@57 -- # (( part++ )) 00:05:21.830 19:26:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.830 19:26:08 -- setup/common.sh@62 -- # wait 65997 00:05:21.830 19:26:08 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:21.830 19:26:08 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.830 19:26:08 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.830 19:26:08 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:21.830 19:26:08 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:21.830 19:26:08 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.830 19:26:08 -- setup/devices.sh@161 -- # break 00:05:21.830 19:26:08 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.830 19:26:08 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:21.830 19:26:08 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:21.830 19:26:08 -- setup/devices.sh@166 -- # dm=dm-0 00:05:21.830 19:26:08 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:21.830 19:26:08 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:21.830 19:26:08 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.830 19:26:08 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:21.830 19:26:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.830 19:26:08 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.830 19:26:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:21.830 19:26:08 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.830 19:26:08 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.830 19:26:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.830 19:26:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:21.830 19:26:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.830 19:26:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.830 19:26:08 -- setup/devices.sh@53 -- # local found=0 00:05:21.830 19:26:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.830 19:26:08 -- setup/devices.sh@56 -- # : 00:05:21.830 19:26:08 -- setup/devices.sh@59 -- # local pci status 00:05:21.830 19:26:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.830 19:26:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.830 19:26:08 -- setup/devices.sh@47 -- # setup output config 00:05:21.830 19:26:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.830 19:26:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.089 19:26:08 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.089 19:26:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:22.089 19:26:08 -- setup/devices.sh@63 -- # found=1 00:05:22.089 19:26:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.089 19:26:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.089 19:26:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.348 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.348 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.348 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.348 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.348 19:26:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.348 19:26:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:22.348 19:26:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.348 19:26:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.348 19:26:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.348 19:26:09 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.348 19:26:09 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.348 19:26:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:22.348 19:26:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.348 19:26:09 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.348 19:26:09 -- setup/devices.sh@51 -- # local test_file= 00:05:22.348 19:26:09 -- setup/devices.sh@53 -- # local found=0 00:05:22.348 19:26:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.348 19:26:09 -- setup/devices.sh@59 -- # local pci status 00:05:22.348 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.348 19:26:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:22.348 19:26:09 -- setup/devices.sh@47 -- # setup output config 00:05:22.348 19:26:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.348 19:26:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.607 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.607 19:26:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.607 19:26:09 -- setup/devices.sh@63 -- # found=1 00:05:22.607 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.607 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.607 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.865 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.865 19:26:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.124 19:26:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:23.124 19:26:09 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.124 19:26:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.124 19:26:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.124 19:26:09 -- setup/devices.sh@68 -- # return 0 00:05:23.124 19:26:09 -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.124 19:26:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.124 19:26:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.124 19:26:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.124 19:26:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.124 19:26:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.124 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.124 19:26:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.124 19:26:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.124 00:05:23.124 real 0m4.573s 00:05:23.124 user 0m0.677s 00:05:23.124 sys 0m0.813s 00:05:23.124 19:26:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.124 19:26:09 -- common/autotest_common.sh@10 -- # set +x 00:05:23.124 ************************************ 00:05:23.124 END TEST dm_mount 00:05:23.124 ************************************ 00:05:23.124 19:26:09 -- setup/devices.sh@1 -- # cleanup 00:05:23.124 19:26:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.124 19:26:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.124 19:26:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.124 19:26:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.125 19:26:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.125 19:26:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.383 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.383 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.383 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.383 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.383 19:26:10 -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.383 19:26:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.383 19:26:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.383 19:26:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.641 19:26:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.641 19:26:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.641 19:26:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.641 00:05:23.641 real 0m10.848s 00:05:23.641 user 0m2.456s 00:05:23.641 sys 0m2.664s 00:05:23.641 19:26:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.641 ************************************ 00:05:23.641 END TEST devices 00:05:23.641 19:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:23.641 ************************************ 00:05:23.641 00:05:23.641 real 0m22.859s 00:05:23.641 user 0m7.755s 00:05:23.642 sys 0m9.520s 00:05:23.642 19:26:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.642 19:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:23.642 ************************************ 00:05:23.642 END TEST setup.sh 00:05:23.642 ************************************ 00:05:23.642 19:26:10 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.642 Hugepages 00:05:23.642 node hugesize free / total 00:05:23.642 node0 1048576kB 0 / 0 00:05:23.642 node0 2048kB 2048 / 2048 00:05:23.642 00:05:23.642 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.900 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.900 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.900 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:23.900 19:26:10 -- spdk/autotest.sh@128 -- # uname -s 00:05:23.900 19:26:10 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:23.900 19:26:10 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:23.900 19:26:10 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.835 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.835 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.835 19:26:11 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:25.813 19:26:12 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:25.813 19:26:12 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:25.813 19:26:12 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.813 19:26:12 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:25.813 19:26:12 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:25.813 19:26:12 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:25.813 19:26:12 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.813 19:26:12 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.813 19:26:12 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:25.813 19:26:12 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:25.813 19:26:12 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:25.813 19:26:12 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.381 Waiting for block devices as requested 00:05:26.381 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.381 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.381 19:26:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:26.381 19:26:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:26.381 19:26:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.381 19:26:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:26.381 19:26:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:26.381 19:26:13 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:26.381 19:26:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:26.381 19:26:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:26.640 19:26:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:26.640 19:26:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:26.640 19:26:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:26.640 19:26:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:26.640 19:26:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:26.640 19:26:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:26.640 19:26:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:26.640 19:26:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:26.640 19:26:13 -- common/autotest_common.sh@1552 -- # continue 00:05:26.640 19:26:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:26.640 19:26:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:26.640 19:26:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.640 19:26:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:26.640 19:26:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:26.640 19:26:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:26.640 19:26:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:26.640 19:26:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:26.641 19:26:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:26.641 19:26:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:26.641 19:26:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:26.641 19:26:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:26.641 19:26:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:26.641 19:26:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:26.641 19:26:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:26.641 19:26:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:26.641 19:26:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:26.641 19:26:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:26.641 19:26:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:26.641 19:26:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:26.641 19:26:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:26.641 19:26:13 -- common/autotest_common.sh@1552 -- # continue 00:05:26.641 19:26:13 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:26.641 19:26:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.641 19:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.641 19:26:13 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:26.641 19:26:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.641 19:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.641 19:26:13 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.467 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.467 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:27.467 19:26:14 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:27.467 19:26:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.467 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.467 19:26:14 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:27.467 19:26:14 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:27.467 19:26:14 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.467 19:26:14 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:27.467 19:26:14 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:27.467 19:26:14 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:27.467 19:26:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:27.467 19:26:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:27.467 19:26:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.467 19:26:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.467 19:26:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:27.467 19:26:14 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:27.467 19:26:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:27.467 19:26:14 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:27.467 19:26:14 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:27.467 19:26:14 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:27.467 19:26:14 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.467 19:26:14 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:27.467 19:26:14 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:27.726 19:26:14 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:27.726 19:26:14 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.726 19:26:14 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:27.726 19:26:14 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:27.726 19:26:14 -- common/autotest_common.sh@1588 -- # return 0 00:05:27.726 19:26:14 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:27.726 19:26:14 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:27.726 19:26:14 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:27.726 19:26:14 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:27.726 19:26:14 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:27.726 19:26:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.726 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.726 19:26:14 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.726 19:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.726 19:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.726 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.726 ************************************ 00:05:27.726 START TEST env 00:05:27.726 ************************************ 00:05:27.726 19:26:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.726 * Looking for test storage... 
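The opal_revert_cleanup and nvme_namespace_revert steps traced just above boil down to two probes per controller: read the PCI device ID from sysfs and compare it against 0x0a54 (the ID this harness associates with opal-capable test drives), and read the OACS field from 'nvme id-ctrl' to see whether namespace management is supported. A standalone sketch of those probes, using the BDFs from this run rather than the harness's gen_nvme.sh/jq enumeration (not the harness's exact code):

    for bdf in 0000:00:06.0 0000:00:07.0; do
        dev=$(cat /sys/bus/pci/devices/$bdf/device)   # 0x0010 for the emulated QEMU NVMe in this run
        [[ $dev == 0x0a54 ]] && echo "$bdf would be opal-reverted"
    done
    oacs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/oacs/ {print $2}')
    (( oacs & 0x8 )) && echo "nvme0 supports namespace management"   # bit 3 of OACS; 0x12a above has it set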
00:05:27.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.726 19:26:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.726 19:26:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.726 19:26:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.726 19:26:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.726 19:26:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.726 19:26:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.726 19:26:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.726 19:26:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.726 19:26:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.726 19:26:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.726 19:26:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.726 19:26:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.726 19:26:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.726 19:26:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.726 19:26:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.726 19:26:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.726 19:26:14 -- scripts/common.sh@344 -- # : 1 00:05:27.726 19:26:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.726 19:26:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.726 19:26:14 -- scripts/common.sh@364 -- # decimal 1 00:05:27.726 19:26:14 -- scripts/common.sh@352 -- # local d=1 00:05:27.726 19:26:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.726 19:26:14 -- scripts/common.sh@354 -- # echo 1 00:05:27.726 19:26:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.726 19:26:14 -- scripts/common.sh@365 -- # decimal 2 00:05:27.727 19:26:14 -- scripts/common.sh@352 -- # local d=2 00:05:27.727 19:26:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.727 19:26:14 -- scripts/common.sh@354 -- # echo 2 00:05:27.727 19:26:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.727 19:26:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.727 19:26:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.727 19:26:14 -- scripts/common.sh@367 -- # return 0 00:05:27.727 19:26:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.727 19:26:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.727 --rc genhtml_branch_coverage=1 00:05:27.727 --rc genhtml_function_coverage=1 00:05:27.727 --rc genhtml_legend=1 00:05:27.727 --rc geninfo_all_blocks=1 00:05:27.727 --rc geninfo_unexecuted_blocks=1 00:05:27.727 00:05:27.727 ' 00:05:27.727 19:26:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.727 --rc genhtml_branch_coverage=1 00:05:27.727 --rc genhtml_function_coverage=1 00:05:27.727 --rc genhtml_legend=1 00:05:27.727 --rc geninfo_all_blocks=1 00:05:27.727 --rc geninfo_unexecuted_blocks=1 00:05:27.727 00:05:27.727 ' 00:05:27.727 19:26:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.727 --rc genhtml_branch_coverage=1 00:05:27.727 --rc genhtml_function_coverage=1 00:05:27.727 --rc genhtml_legend=1 00:05:27.727 --rc geninfo_all_blocks=1 00:05:27.727 --rc geninfo_unexecuted_blocks=1 00:05:27.727 00:05:27.727 ' 00:05:27.727 19:26:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.727 --rc genhtml_branch_coverage=1 00:05:27.727 --rc genhtml_function_coverage=1 00:05:27.727 --rc genhtml_legend=1 00:05:27.727 --rc geninfo_all_blocks=1 00:05:27.727 --rc geninfo_unexecuted_blocks=1 00:05:27.727 00:05:27.727 ' 00:05:27.727 19:26:14 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.727 19:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.727 19:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.727 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.727 ************************************ 00:05:27.727 START TEST env_memory 00:05:27.727 ************************************ 00:05:27.727 19:26:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.727 00:05:27.727 00:05:27.727 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.727 http://cunit.sourceforge.net/ 00:05:27.727 00:05:27.727 00:05:27.727 Suite: memory 00:05:27.986 Test: alloc and free memory map ...[2024-12-15 19:26:14.632159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.986 passed 00:05:27.986 Test: mem map translation ...[2024-12-15 19:26:14.663481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.986 [2024-12-15 19:26:14.663531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.986 [2024-12-15 19:26:14.663600] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.986 [2024-12-15 19:26:14.663611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.986 passed 00:05:27.986 Test: mem map registration ...[2024-12-15 19:26:14.727606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:27.986 [2024-12-15 19:26:14.727646] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:27.986 passed 00:05:27.986 Test: mem map adjacent registrations ...passed 00:05:27.986 00:05:27.986 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.986 suites 1 1 n/a 0 0 00:05:27.986 tests 4 4 4 0 0 00:05:27.986 asserts 152 152 152 0 n/a 00:05:27.986 00:05:27.986 Elapsed time = 0.214 seconds 00:05:27.986 00:05:27.986 real 0m0.232s 00:05:27.986 user 0m0.215s 00:05:27.986 sys 0m0.013s 00:05:27.986 19:26:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.986 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.986 ************************************ 00:05:27.986 END TEST env_memory 00:05:27.986 ************************************ 00:05:27.986 19:26:14 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.986 19:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.986 19:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.986 19:26:14 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.986 ************************************ 00:05:27.986 START TEST env_vtophys 00:05:27.986 ************************************ 00:05:27.986 19:26:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.246 EAL: lib.eal log level changed from notice to debug 00:05:28.246 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 1 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 2 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 3 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 4 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 5 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 6 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 7 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 8 as core 0 on socket 0 00:05:28.246 EAL: Detected lcore 9 as core 0 on socket 0 00:05:28.246 EAL: Maximum logical cores by configuration: 128 00:05:28.246 EAL: Detected CPU lcores: 10 00:05:28.246 EAL: Detected NUMA nodes: 1 00:05:28.246 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:28.246 EAL: Detected shared linkage of DPDK 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:28.246 EAL: Registered [vdev] bus. 00:05:28.246 EAL: bus.vdev log level changed from disabled to notice 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:28.246 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:28.246 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:28.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:28.246 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Selected IOVA mode 'PA' 00:05:28.246 EAL: Probing VFIO support... 00:05:28.246 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.246 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:28.246 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.246 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.246 EAL: Setting up physically contiguous memory... 
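EAL reports above that /sys/module/vfio is missing and continues without VFIO support, which is why the controllers in this run sit on uio_pci_generic. A quick out-of-band check for whether vfio-pci could have been used instead (a sketch of the usual preconditions, not part of the test itself):

    if [ -e /sys/module/vfio ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "vfio module loaded and IOMMU groups populated: vfio-pci is usable"
    else
        echo "no VFIO/IOMMU support: expect the uio_pci_generic fallback seen in this log"
    fi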
00:05:28.246 EAL: Setting maximum number of open files to 524288 00:05:28.246 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.246 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.246 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.246 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.246 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.246 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.246 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.246 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.246 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.246 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.246 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.246 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.246 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.246 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.246 EAL: Hugepages will be freed exactly as allocated. 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: TSC frequency is ~2200000 KHz 00:05:28.246 EAL: Main lcore 0 is ready (tid=7f88e2514a00;cpuset=[0]) 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 0 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.246 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.246 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.246 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.246 00:05:28.246 00:05:28.246 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.246 http://cunit.sourceforge.net/ 00:05:28.246 00:05:28.246 00:05:28.246 Suite: components_suite 00:05:28.246 Test: vtophys_malloc_test ...passed 00:05:28.246 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.246 EAL: Trying to obtain current memory policy. 
00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.246 EAL: Trying to obtain current memory policy. 00:05:28.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.246 EAL: Restoring previous memory policy: 4 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.246 EAL: request: mp_malloc_sync 00:05:28.246 EAL: No shared files mode enabled, IPC is disabled 00:05:28.246 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.505 EAL: request: mp_malloc_sync 00:05:28.505 EAL: No shared files mode enabled, IPC is disabled 00:05:28.505 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.505 EAL: Trying to obtain current memory policy. 00:05:28.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.505 EAL: Restoring previous memory policy: 4 00:05:28.505 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.505 EAL: request: mp_malloc_sync 00:05:28.505 EAL: No shared files mode enabled, IPC is disabled 00:05:28.505 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.505 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.505 EAL: request: mp_malloc_sync 00:05:28.505 EAL: No shared files mode enabled, IPC is disabled 00:05:28.505 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.505 EAL: Trying to obtain current memory policy. 00:05:28.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.764 EAL: Restoring previous memory policy: 4 00:05:28.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.764 EAL: request: mp_malloc_sync 00:05:28.764 EAL: No shared files mode enabled, IPC is disabled 00:05:28.764 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.764 EAL: request: mp_malloc_sync 00:05:28.764 EAL: No shared files mode enabled, IPC is disabled 00:05:28.764 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.764 EAL: Trying to obtain current memory policy. 
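The expand/shrink pairs in this trace come from DPDK's dynamic memory mode (note the earlier "Hugepages will be freed exactly as allocated" message): each allocation that outgrows the heap maps additional 2048 kB hugepages, and freeing it gives the pages back. One way to watch that from outside while the test runs (observation only, not part of the suite):

    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages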
00:05:28.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.022 EAL: Restoring previous memory policy: 4 00:05:29.022 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.022 EAL: request: mp_malloc_sync 00:05:29.022 EAL: No shared files mode enabled, IPC is disabled 00:05:29.022 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.281 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.540 passed 00:05:29.540 00:05:29.540 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.540 suites 1 1 n/a 0 0 00:05:29.540 tests 2 2 2 0 0 00:05:29.540 asserts 5218 5218 5218 0 n/a 00:05:29.540 00:05:29.540 Elapsed time = 1.239 seconds 00:05:29.540 EAL: request: mp_malloc_sync 00:05:29.540 EAL: No shared files mode enabled, IPC is disabled 00:05:29.540 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.540 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.540 EAL: request: mp_malloc_sync 00:05:29.540 EAL: No shared files mode enabled, IPC is disabled 00:05:29.540 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.540 EAL: No shared files mode enabled, IPC is disabled 00:05:29.540 EAL: No shared files mode enabled, IPC is disabled 00:05:29.540 EAL: No shared files mode enabled, IPC is disabled 00:05:29.540 00:05:29.540 real 0m1.437s 00:05:29.540 user 0m0.790s 00:05:29.540 sys 0m0.514s 00:05:29.540 19:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.540 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 END TEST env_vtophys 00:05:29.540 ************************************ 00:05:29.540 19:26:16 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.540 19:26:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.540 19:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.540 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 START TEST env_pci 00:05:29.540 ************************************ 00:05:29.540 19:26:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.540 00:05:29.540 00:05:29.540 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.540 http://cunit.sourceforge.net/ 00:05:29.540 00:05:29.540 00:05:29.540 Suite: pci 00:05:29.540 Test: pci_hook ...[2024-12-15 19:26:16.378231] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67138 has claimed it 00:05:29.540 passed 00:05:29.540 00:05:29.540 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.540 suites 1 1 n/a 0 0 00:05:29.540 tests 1 1 1 0 0 00:05:29.540 asserts 25 25 25 0 n/a 00:05:29.540 00:05:29.540 Elapsed time = 0.002 seconds 00:05:29.540 EAL: Cannot find device (10000:00:01.0) 00:05:29.540 EAL: Failed to attach device on primary process 00:05:29.540 00:05:29.540 real 0m0.021s 00:05:29.540 user 0m0.010s 00:05:29.540 sys 0m0.010s 00:05:29.540 19:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.540 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 END TEST env_pci 00:05:29.540 ************************************ 00:05:29.540 19:26:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.540 19:26:16 -- env/env.sh@15 -- # uname 00:05:29.540 19:26:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.540 19:26:16 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:29.540 19:26:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.540 19:26:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:29.540 19:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.540 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.799 ************************************ 00:05:29.799 START TEST env_dpdk_post_init 00:05:29.799 ************************************ 00:05:29.799 19:26:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.799 EAL: Detected CPU lcores: 10 00:05:29.799 EAL: Detected NUMA nodes: 1 00:05:29.799 EAL: Detected shared linkage of DPDK 00:05:29.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.799 EAL: Selected IOVA mode 'PA' 00:05:29.799 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.799 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:29.799 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:29.799 Starting DPDK initialization... 00:05:29.799 Starting SPDK post initialization... 00:05:29.799 SPDK NVMe probe 00:05:29.799 Attaching to 0000:00:06.0 00:05:29.799 Attaching to 0000:00:07.0 00:05:29.799 Attached to 0000:00:06.0 00:05:29.799 Attached to 0000:00:07.0 00:05:29.799 Cleaning up... 00:05:29.799 00:05:29.799 real 0m0.163s 00:05:29.799 user 0m0.041s 00:05:29.799 sys 0m0.023s 00:05:29.799 19:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.799 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.799 ************************************ 00:05:29.799 END TEST env_dpdk_post_init 00:05:29.799 ************************************ 00:05:29.799 19:26:16 -- env/env.sh@26 -- # uname 00:05:29.799 19:26:16 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.799 19:26:16 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.799 19:26:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.799 19:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.799 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.799 ************************************ 00:05:29.799 START TEST env_mem_callbacks 00:05:29.799 ************************************ 00:05:29.799 19:26:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.799 EAL: Detected CPU lcores: 10 00:05:29.799 EAL: Detected NUMA nodes: 1 00:05:29.799 EAL: Detected shared linkage of DPDK 00:05:29.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.058 EAL: Selected IOVA mode 'PA' 00:05:30.058 00:05:30.058 00:05:30.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.058 http://cunit.sourceforge.net/ 00:05:30.058 00:05:30.058 00:05:30.058 Suite: memory 00:05:30.058 Test: test ... 
00:05:30.058 register 0x200000200000 2097152 00:05:30.058 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.058 malloc 3145728 00:05:30.058 register 0x200000400000 4194304 00:05:30.058 buf 0x200000500000 len 3145728 PASSED 00:05:30.058 malloc 64 00:05:30.058 buf 0x2000004fff40 len 64 PASSED 00:05:30.058 malloc 4194304 00:05:30.058 register 0x200000800000 6291456 00:05:30.058 buf 0x200000a00000 len 4194304 PASSED 00:05:30.058 free 0x200000500000 3145728 00:05:30.058 free 0x2000004fff40 64 00:05:30.058 unregister 0x200000400000 4194304 PASSED 00:05:30.058 free 0x200000a00000 4194304 00:05:30.058 unregister 0x200000800000 6291456 PASSED 00:05:30.058 malloc 8388608 00:05:30.058 register 0x200000400000 10485760 00:05:30.058 buf 0x200000600000 len 8388608 PASSED 00:05:30.058 free 0x200000600000 8388608 00:05:30.058 unregister 0x200000400000 10485760 PASSED 00:05:30.058 passed 00:05:30.058 00:05:30.058 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.058 suites 1 1 n/a 0 0 00:05:30.058 tests 1 1 1 0 0 00:05:30.058 asserts 15 15 15 0 n/a 00:05:30.058 00:05:30.058 Elapsed time = 0.008 seconds 00:05:30.058 00:05:30.058 real 0m0.142s 00:05:30.058 user 0m0.015s 00:05:30.058 sys 0m0.026s 00:05:30.058 19:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.058 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.058 ************************************ 00:05:30.058 END TEST env_mem_callbacks 00:05:30.058 ************************************ 00:05:30.058 00:05:30.058 real 0m2.465s 00:05:30.058 user 0m1.290s 00:05:30.058 sys 0m0.826s 00:05:30.058 19:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.059 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.059 ************************************ 00:05:30.059 END TEST env 00:05:30.059 ************************************ 00:05:30.059 19:26:16 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.059 19:26:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.059 19:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.059 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.059 ************************************ 00:05:30.059 START TEST rpc 00:05:30.059 ************************************ 00:05:30.059 19:26:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.318 * Looking for test storage... 
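The run_test rpc invocation below opens with the same lcov version gate already traced at the start of the env suite: scripts/common.sh splits the two version strings on dots and compares them field by field. If all that is needed in shell is "is A older than B", GNU sort -V gives a much shorter equivalent (a sketch, not the repo's implementation):

    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # same outcome as the 'lt 1.15 2' check traced here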
00:05:30.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.318 19:26:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.318 19:26:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.318 19:26:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.318 19:26:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.318 19:26:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.318 19:26:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.318 19:26:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.318 19:26:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.318 19:26:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.318 19:26:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.318 19:26:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.318 19:26:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.318 19:26:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.318 19:26:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.318 19:26:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.318 19:26:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.318 19:26:17 -- scripts/common.sh@344 -- # : 1 00:05:30.318 19:26:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.318 19:26:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.318 19:26:17 -- scripts/common.sh@364 -- # decimal 1 00:05:30.318 19:26:17 -- scripts/common.sh@352 -- # local d=1 00:05:30.318 19:26:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.318 19:26:17 -- scripts/common.sh@354 -- # echo 1 00:05:30.318 19:26:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.318 19:26:17 -- scripts/common.sh@365 -- # decimal 2 00:05:30.318 19:26:17 -- scripts/common.sh@352 -- # local d=2 00:05:30.318 19:26:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.318 19:26:17 -- scripts/common.sh@354 -- # echo 2 00:05:30.318 19:26:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.318 19:26:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.318 19:26:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.318 19:26:17 -- scripts/common.sh@367 -- # return 0 00:05:30.318 19:26:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.318 19:26:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.318 --rc genhtml_branch_coverage=1 00:05:30.318 --rc genhtml_function_coverage=1 00:05:30.318 --rc genhtml_legend=1 00:05:30.318 --rc geninfo_all_blocks=1 00:05:30.318 --rc geninfo_unexecuted_blocks=1 00:05:30.318 00:05:30.318 ' 00:05:30.318 19:26:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.318 --rc genhtml_branch_coverage=1 00:05:30.318 --rc genhtml_function_coverage=1 00:05:30.318 --rc genhtml_legend=1 00:05:30.318 --rc geninfo_all_blocks=1 00:05:30.318 --rc geninfo_unexecuted_blocks=1 00:05:30.318 00:05:30.318 ' 00:05:30.318 19:26:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.318 --rc genhtml_branch_coverage=1 00:05:30.318 --rc genhtml_function_coverage=1 00:05:30.318 --rc genhtml_legend=1 00:05:30.318 --rc geninfo_all_blocks=1 00:05:30.318 --rc geninfo_unexecuted_blocks=1 00:05:30.318 00:05:30.318 ' 00:05:30.318 19:26:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.318 --rc genhtml_branch_coverage=1 00:05:30.318 --rc genhtml_function_coverage=1 00:05:30.318 --rc genhtml_legend=1 00:05:30.318 --rc geninfo_all_blocks=1 00:05:30.318 --rc geninfo_unexecuted_blocks=1 00:05:30.318 00:05:30.318 ' 00:05:30.318 19:26:17 -- rpc/rpc.sh@65 -- # spdk_pid=67259 00:05:30.318 19:26:17 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.318 19:26:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.318 19:26:17 -- rpc/rpc.sh@67 -- # waitforlisten 67259 00:05:30.318 19:26:17 -- common/autotest_common.sh@829 -- # '[' -z 67259 ']' 00:05:30.318 19:26:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.318 19:26:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.318 19:26:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.318 19:26:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.318 19:26:17 -- common/autotest_common.sh@10 -- # set +x 00:05:30.318 [2024-12-15 19:26:17.154383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:30.318 [2024-12-15 19:26:17.154712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67259 ] 00:05:30.577 [2024-12-15 19:26:17.292328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.577 [2024-12-15 19:26:17.354874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.577 [2024-12-15 19:26:17.355027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.577 [2024-12-15 19:26:17.355041] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67259' to capture a snapshot of events at runtime. 00:05:30.577 [2024-12-15 19:26:17.355049] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67259 for offline analysis/debug. 
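At this point spdk_tgt has been launched with only the bdev tracepoint group enabled and the harness is polling the default RPC socket until it answers; the rpc_integrity test that follows then creates a malloc bdev, layers a passthru bdev on top of it, and deletes both. A rough manual equivalent of that sequence, run from the SPDK repo root (paths assumed; the harness goes through its rpc_cmd/waitforlisten wrappers instead):

    ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py bdev_malloc_create 8 512                 # 8 MB, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs                           # Malloc0 now claimed, Passthru0 layered on it
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0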
00:05:30.577 [2024-12-15 19:26:17.355106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.513 19:26:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.513 19:26:18 -- common/autotest_common.sh@862 -- # return 0 00:05:31.513 19:26:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.513 19:26:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.513 19:26:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.513 19:26:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.513 19:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.513 19:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.513 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 ************************************ 00:05:31.513 START TEST rpc_integrity 00:05:31.513 ************************************ 00:05:31.513 19:26:18 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:31.513 19:26:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.513 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.513 19:26:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.513 19:26:18 -- rpc/rpc.sh@13 -- # jq length 00:05:31.513 19:26:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.513 19:26:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.513 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.514 19:26:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.514 { 00:05:31.514 "aliases": [ 00:05:31.514 "ce3de404-779a-4914-98f6-8f4e090ec5b3" 00:05:31.514 ], 00:05:31.514 "assigned_rate_limits": { 00:05:31.514 "r_mbytes_per_sec": 0, 00:05:31.514 "rw_ios_per_sec": 0, 00:05:31.514 "rw_mbytes_per_sec": 0, 00:05:31.514 "w_mbytes_per_sec": 0 00:05:31.514 }, 00:05:31.514 "block_size": 512, 00:05:31.514 "claimed": false, 00:05:31.514 "driver_specific": {}, 00:05:31.514 "memory_domains": [ 00:05:31.514 { 00:05:31.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.514 "dma_device_type": 2 00:05:31.514 } 00:05:31.514 ], 00:05:31.514 "name": "Malloc0", 00:05:31.514 "num_blocks": 16384, 00:05:31.514 "product_name": "Malloc disk", 00:05:31.514 "supported_io_types": { 00:05:31.514 "abort": true, 00:05:31.514 "compare": false, 00:05:31.514 "compare_and_write": false, 00:05:31.514 "flush": true, 00:05:31.514 "nvme_admin": false, 00:05:31.514 "nvme_io": false, 00:05:31.514 "read": true, 00:05:31.514 "reset": true, 00:05:31.514 "unmap": true, 00:05:31.514 "write": true, 00:05:31.514 "write_zeroes": true 00:05:31.514 }, 
00:05:31.514 "uuid": "ce3de404-779a-4914-98f6-8f4e090ec5b3", 00:05:31.514 "zoned": false 00:05:31.514 } 00:05:31.514 ]' 00:05:31.514 19:26:18 -- rpc/rpc.sh@17 -- # jq length 00:05:31.514 19:26:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.514 19:26:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 [2024-12-15 19:26:18.299225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.514 [2024-12-15 19:26:18.299274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.514 [2024-12-15 19:26:18.299289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b91490 00:05:31.514 [2024-12-15 19:26:18.299297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.514 [2024-12-15 19:26:18.300592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.514 [2024-12-15 19:26:18.300624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.514 Passthru0 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.514 { 00:05:31.514 "aliases": [ 00:05:31.514 "ce3de404-779a-4914-98f6-8f4e090ec5b3" 00:05:31.514 ], 00:05:31.514 "assigned_rate_limits": { 00:05:31.514 "r_mbytes_per_sec": 0, 00:05:31.514 "rw_ios_per_sec": 0, 00:05:31.514 "rw_mbytes_per_sec": 0, 00:05:31.514 "w_mbytes_per_sec": 0 00:05:31.514 }, 00:05:31.514 "block_size": 512, 00:05:31.514 "claim_type": "exclusive_write", 00:05:31.514 "claimed": true, 00:05:31.514 "driver_specific": {}, 00:05:31.514 "memory_domains": [ 00:05:31.514 { 00:05:31.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.514 "dma_device_type": 2 00:05:31.514 } 00:05:31.514 ], 00:05:31.514 "name": "Malloc0", 00:05:31.514 "num_blocks": 16384, 00:05:31.514 "product_name": "Malloc disk", 00:05:31.514 "supported_io_types": { 00:05:31.514 "abort": true, 00:05:31.514 "compare": false, 00:05:31.514 "compare_and_write": false, 00:05:31.514 "flush": true, 00:05:31.514 "nvme_admin": false, 00:05:31.514 "nvme_io": false, 00:05:31.514 "read": true, 00:05:31.514 "reset": true, 00:05:31.514 "unmap": true, 00:05:31.514 "write": true, 00:05:31.514 "write_zeroes": true 00:05:31.514 }, 00:05:31.514 "uuid": "ce3de404-779a-4914-98f6-8f4e090ec5b3", 00:05:31.514 "zoned": false 00:05:31.514 }, 00:05:31.514 { 00:05:31.514 "aliases": [ 00:05:31.514 "377e0a3a-6959-5701-811d-3b168ce4a7db" 00:05:31.514 ], 00:05:31.514 "assigned_rate_limits": { 00:05:31.514 "r_mbytes_per_sec": 0, 00:05:31.514 "rw_ios_per_sec": 0, 00:05:31.514 "rw_mbytes_per_sec": 0, 00:05:31.514 "w_mbytes_per_sec": 0 00:05:31.514 }, 00:05:31.514 "block_size": 512, 00:05:31.514 "claimed": false, 00:05:31.514 "driver_specific": { 00:05:31.514 "passthru": { 00:05:31.514 "base_bdev_name": "Malloc0", 00:05:31.514 "name": "Passthru0" 00:05:31.514 } 00:05:31.514 }, 00:05:31.514 "memory_domains": [ 00:05:31.514 { 00:05:31.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.514 "dma_device_type": 2 00:05:31.514 } 00:05:31.514 ], 
00:05:31.514 "name": "Passthru0", 00:05:31.514 "num_blocks": 16384, 00:05:31.514 "product_name": "passthru", 00:05:31.514 "supported_io_types": { 00:05:31.514 "abort": true, 00:05:31.514 "compare": false, 00:05:31.514 "compare_and_write": false, 00:05:31.514 "flush": true, 00:05:31.514 "nvme_admin": false, 00:05:31.514 "nvme_io": false, 00:05:31.514 "read": true, 00:05:31.514 "reset": true, 00:05:31.514 "unmap": true, 00:05:31.514 "write": true, 00:05:31.514 "write_zeroes": true 00:05:31.514 }, 00:05:31.514 "uuid": "377e0a3a-6959-5701-811d-3b168ce4a7db", 00:05:31.514 "zoned": false 00:05:31.514 } 00:05:31.514 ]' 00:05:31.514 19:26:18 -- rpc/rpc.sh@21 -- # jq length 00:05:31.514 19:26:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.514 19:26:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.514 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.514 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.514 19:26:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.514 19:26:18 -- rpc/rpc.sh@26 -- # jq length 00:05:31.773 ************************************ 00:05:31.773 END TEST rpc_integrity 00:05:31.773 ************************************ 00:05:31.773 19:26:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.773 00:05:31.773 real 0m0.314s 00:05:31.773 user 0m0.200s 00:05:31.773 sys 0m0.037s 00:05:31.773 19:26:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 19:26:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.773 19:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.773 19:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 ************************************ 00:05:31.773 START TEST rpc_plugins 00:05:31.773 ************************************ 00:05:31.773 19:26:18 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:31.773 19:26:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.773 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.773 19:26:18 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.773 19:26:18 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.773 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.773 19:26:18 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.773 { 00:05:31.773 "aliases": [ 00:05:31.773 "08839b09-c493-402a-ac32-eedaa482c813" 00:05:31.773 ], 00:05:31.773 "assigned_rate_limits": { 00:05:31.773 "r_mbytes_per_sec": 0, 00:05:31.773 
"rw_ios_per_sec": 0, 00:05:31.773 "rw_mbytes_per_sec": 0, 00:05:31.773 "w_mbytes_per_sec": 0 00:05:31.773 }, 00:05:31.773 "block_size": 4096, 00:05:31.773 "claimed": false, 00:05:31.773 "driver_specific": {}, 00:05:31.773 "memory_domains": [ 00:05:31.773 { 00:05:31.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.773 "dma_device_type": 2 00:05:31.773 } 00:05:31.773 ], 00:05:31.773 "name": "Malloc1", 00:05:31.773 "num_blocks": 256, 00:05:31.773 "product_name": "Malloc disk", 00:05:31.773 "supported_io_types": { 00:05:31.773 "abort": true, 00:05:31.773 "compare": false, 00:05:31.773 "compare_and_write": false, 00:05:31.773 "flush": true, 00:05:31.773 "nvme_admin": false, 00:05:31.773 "nvme_io": false, 00:05:31.773 "read": true, 00:05:31.773 "reset": true, 00:05:31.773 "unmap": true, 00:05:31.773 "write": true, 00:05:31.773 "write_zeroes": true 00:05:31.773 }, 00:05:31.773 "uuid": "08839b09-c493-402a-ac32-eedaa482c813", 00:05:31.773 "zoned": false 00:05:31.773 } 00:05:31.773 ]' 00:05:31.773 19:26:18 -- rpc/rpc.sh@32 -- # jq length 00:05:31.773 19:26:18 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.773 19:26:18 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.773 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.773 19:26:18 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.773 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.773 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.773 19:26:18 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.773 19:26:18 -- rpc/rpc.sh@36 -- # jq length 00:05:32.032 ************************************ 00:05:32.032 END TEST rpc_plugins 00:05:32.032 ************************************ 00:05:32.032 19:26:18 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:32.032 00:05:32.032 real 0m0.158s 00:05:32.032 user 0m0.105s 00:05:32.032 sys 0m0.018s 00:05:32.032 19:26:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.032 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 19:26:18 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:32.032 19:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.032 19:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.032 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 ************************************ 00:05:32.032 START TEST rpc_trace_cmd_test 00:05:32.032 ************************************ 00:05:32.032 19:26:18 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:32.032 19:26:18 -- rpc/rpc.sh@40 -- # local info 00:05:32.032 19:26:18 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:32.032 19:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.032 19:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 19:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.032 19:26:18 -- rpc/rpc.sh@42 -- # info='{ 00:05:32.032 "bdev": { 00:05:32.032 "mask": "0x8", 00:05:32.032 "tpoint_mask": "0xffffffffffffffff" 00:05:32.032 }, 00:05:32.032 "bdev_nvme": { 00:05:32.032 "mask": "0x4000", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "blobfs": { 00:05:32.032 "mask": "0x80", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "dsa": { 00:05:32.032 "mask": "0x200", 00:05:32.032 
"tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "ftl": { 00:05:32.032 "mask": "0x40", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "iaa": { 00:05:32.032 "mask": "0x1000", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "iscsi_conn": { 00:05:32.032 "mask": "0x2", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "nvme_pcie": { 00:05:32.032 "mask": "0x800", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "nvme_tcp": { 00:05:32.032 "mask": "0x2000", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "nvmf_rdma": { 00:05:32.032 "mask": "0x10", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "nvmf_tcp": { 00:05:32.032 "mask": "0x20", 00:05:32.032 "tpoint_mask": "0x0" 00:05:32.032 }, 00:05:32.032 "scsi": { 00:05:32.032 "mask": "0x4", 00:05:32.033 "tpoint_mask": "0x0" 00:05:32.033 }, 00:05:32.033 "thread": { 00:05:32.033 "mask": "0x400", 00:05:32.033 "tpoint_mask": "0x0" 00:05:32.033 }, 00:05:32.033 "tpoint_group_mask": "0x8", 00:05:32.033 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67259" 00:05:32.033 }' 00:05:32.033 19:26:18 -- rpc/rpc.sh@43 -- # jq length 00:05:32.033 19:26:18 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:32.033 19:26:18 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:32.033 19:26:18 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:32.033 19:26:18 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:32.033 19:26:18 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:32.033 19:26:18 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:32.291 19:26:18 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:32.291 19:26:18 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:32.291 ************************************ 00:05:32.291 END TEST rpc_trace_cmd_test 00:05:32.291 ************************************ 00:05:32.291 19:26:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:32.291 00:05:32.292 real 0m0.287s 00:05:32.292 user 0m0.249s 00:05:32.292 sys 0m0.028s 00:05:32.292 19:26:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.292 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 19:26:19 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:32.292 19:26:19 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:32.292 19:26:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.292 19:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.292 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 ************************************ 00:05:32.292 START TEST go_rpc 00:05:32.292 ************************************ 00:05:32.292 19:26:19 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:32.292 19:26:19 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.292 19:26:19 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:32.292 19:26:19 -- rpc/rpc.sh@52 -- # jq length 00:05:32.292 19:26:19 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:32.292 19:26:19 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.292 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.292 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.292 19:26:19 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:32.292 19:26:19 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.292 19:26:19 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["f23affe9-cfa4-4787-8815-2619a800ae88"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"f23affe9-cfa4-4787-8815-2619a800ae88","zoned":false}]' 00:05:32.292 19:26:19 -- rpc/rpc.sh@57 -- # jq length 00:05:32.550 19:26:19 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:32.550 19:26:19 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.551 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.551 19:26:19 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.551 19:26:19 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:32.551 19:26:19 -- rpc/rpc.sh@61 -- # jq length 00:05:32.551 19:26:19 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:32.551 00:05:32.551 real 0m0.217s 00:05:32.551 user 0m0.149s 00:05:32.551 sys 0m0.036s 00:05:32.551 19:26:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 ************************************ 00:05:32.551 END TEST go_rpc 00:05:32.551 ************************************ 00:05:32.551 19:26:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:32.551 19:26:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:32.551 19:26:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.551 19:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 ************************************ 00:05:32.551 START TEST rpc_daemon_integrity 00:05:32.551 ************************************ 00:05:32.551 19:26:19 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:32.551 19:26:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.551 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.551 19:26:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.551 19:26:19 -- rpc/rpc.sh@13 -- # jq length 00:05:32.551 19:26:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.551 19:26:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.551 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.551 19:26:19 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:32.551 19:26:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.551 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.551 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.551 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.551 19:26:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.551 { 00:05:32.551 "aliases": [ 00:05:32.551 "6fd92009-ea04-46f1-8ca1-b3b8785bda26" 00:05:32.551 ], 00:05:32.551 "assigned_rate_limits": { 00:05:32.551 
"r_mbytes_per_sec": 0, 00:05:32.551 "rw_ios_per_sec": 0, 00:05:32.551 "rw_mbytes_per_sec": 0, 00:05:32.551 "w_mbytes_per_sec": 0 00:05:32.551 }, 00:05:32.551 "block_size": 512, 00:05:32.551 "claimed": false, 00:05:32.551 "driver_specific": {}, 00:05:32.551 "memory_domains": [ 00:05:32.551 { 00:05:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.551 "dma_device_type": 2 00:05:32.551 } 00:05:32.551 ], 00:05:32.551 "name": "Malloc3", 00:05:32.551 "num_blocks": 16384, 00:05:32.551 "product_name": "Malloc disk", 00:05:32.551 "supported_io_types": { 00:05:32.551 "abort": true, 00:05:32.551 "compare": false, 00:05:32.551 "compare_and_write": false, 00:05:32.551 "flush": true, 00:05:32.551 "nvme_admin": false, 00:05:32.551 "nvme_io": false, 00:05:32.551 "read": true, 00:05:32.551 "reset": true, 00:05:32.551 "unmap": true, 00:05:32.551 "write": true, 00:05:32.551 "write_zeroes": true 00:05:32.551 }, 00:05:32.551 "uuid": "6fd92009-ea04-46f1-8ca1-b3b8785bda26", 00:05:32.551 "zoned": false 00:05:32.551 } 00:05:32.551 ]' 00:05:32.551 19:26:19 -- rpc/rpc.sh@17 -- # jq length 00:05:32.810 19:26:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.810 19:26:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:32.810 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.810 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.810 [2024-12-15 19:26:19.491599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:32.810 [2024-12-15 19:26:19.491632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.810 [2024-12-15 19:26:19.491646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19e41d0 00:05:32.810 [2024-12-15 19:26:19.491654] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.810 [2024-12-15 19:26:19.492656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.810 [2024-12-15 19:26:19.492683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.810 Passthru0 00:05:32.810 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.810 19:26:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.810 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.810 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.810 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.810 19:26:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.810 { 00:05:32.810 "aliases": [ 00:05:32.810 "6fd92009-ea04-46f1-8ca1-b3b8785bda26" 00:05:32.810 ], 00:05:32.810 "assigned_rate_limits": { 00:05:32.810 "r_mbytes_per_sec": 0, 00:05:32.810 "rw_ios_per_sec": 0, 00:05:32.810 "rw_mbytes_per_sec": 0, 00:05:32.810 "w_mbytes_per_sec": 0 00:05:32.810 }, 00:05:32.810 "block_size": 512, 00:05:32.810 "claim_type": "exclusive_write", 00:05:32.810 "claimed": true, 00:05:32.810 "driver_specific": {}, 00:05:32.810 "memory_domains": [ 00:05:32.810 { 00:05:32.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.810 "dma_device_type": 2 00:05:32.810 } 00:05:32.810 ], 00:05:32.810 "name": "Malloc3", 00:05:32.810 "num_blocks": 16384, 00:05:32.810 "product_name": "Malloc disk", 00:05:32.810 "supported_io_types": { 00:05:32.810 "abort": true, 00:05:32.810 "compare": false, 00:05:32.810 "compare_and_write": false, 00:05:32.810 "flush": true, 00:05:32.810 "nvme_admin": false, 00:05:32.810 "nvme_io": false, 00:05:32.810 "read": true, 00:05:32.810 "reset": true, 
00:05:32.810 "unmap": true, 00:05:32.810 "write": true, 00:05:32.810 "write_zeroes": true 00:05:32.810 }, 00:05:32.810 "uuid": "6fd92009-ea04-46f1-8ca1-b3b8785bda26", 00:05:32.810 "zoned": false 00:05:32.810 }, 00:05:32.810 { 00:05:32.810 "aliases": [ 00:05:32.810 "8c025a96-9683-591e-807c-d8e6a64fde88" 00:05:32.810 ], 00:05:32.810 "assigned_rate_limits": { 00:05:32.810 "r_mbytes_per_sec": 0, 00:05:32.810 "rw_ios_per_sec": 0, 00:05:32.810 "rw_mbytes_per_sec": 0, 00:05:32.810 "w_mbytes_per_sec": 0 00:05:32.810 }, 00:05:32.810 "block_size": 512, 00:05:32.810 "claimed": false, 00:05:32.810 "driver_specific": { 00:05:32.810 "passthru": { 00:05:32.810 "base_bdev_name": "Malloc3", 00:05:32.810 "name": "Passthru0" 00:05:32.810 } 00:05:32.810 }, 00:05:32.810 "memory_domains": [ 00:05:32.810 { 00:05:32.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.810 "dma_device_type": 2 00:05:32.810 } 00:05:32.810 ], 00:05:32.810 "name": "Passthru0", 00:05:32.810 "num_blocks": 16384, 00:05:32.810 "product_name": "passthru", 00:05:32.810 "supported_io_types": { 00:05:32.810 "abort": true, 00:05:32.810 "compare": false, 00:05:32.810 "compare_and_write": false, 00:05:32.810 "flush": true, 00:05:32.810 "nvme_admin": false, 00:05:32.810 "nvme_io": false, 00:05:32.810 "read": true, 00:05:32.810 "reset": true, 00:05:32.810 "unmap": true, 00:05:32.810 "write": true, 00:05:32.810 "write_zeroes": true 00:05:32.810 }, 00:05:32.810 "uuid": "8c025a96-9683-591e-807c-d8e6a64fde88", 00:05:32.810 "zoned": false 00:05:32.810 } 00:05:32.810 ]' 00:05:32.810 19:26:19 -- rpc/rpc.sh@21 -- # jq length 00:05:32.810 19:26:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.810 19:26:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.811 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.811 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.811 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.811 19:26:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:32.811 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.811 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.811 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.811 19:26:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.811 19:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.811 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.811 19:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.811 19:26:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.811 19:26:19 -- rpc/rpc.sh@26 -- # jq length 00:05:32.811 ************************************ 00:05:32.811 END TEST rpc_daemon_integrity 00:05:32.811 ************************************ 00:05:32.811 19:26:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.811 00:05:32.811 real 0m0.329s 00:05:32.811 user 0m0.211s 00:05:32.811 sys 0m0.047s 00:05:32.811 19:26:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.811 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 19:26:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.070 19:26:19 -- rpc/rpc.sh@84 -- # killprocess 67259 00:05:33.070 19:26:19 -- common/autotest_common.sh@936 -- # '[' -z 67259 ']' 00:05:33.070 19:26:19 -- common/autotest_common.sh@940 -- # kill -0 67259 00:05:33.070 19:26:19 -- common/autotest_common.sh@941 -- # uname 00:05:33.070 19:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.070 19:26:19 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67259 00:05:33.070 killing process with pid 67259 00:05:33.070 19:26:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.070 19:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.070 19:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67259' 00:05:33.070 19:26:19 -- common/autotest_common.sh@955 -- # kill 67259 00:05:33.070 19:26:19 -- common/autotest_common.sh@960 -- # wait 67259 00:05:33.329 00:05:33.329 real 0m3.201s 00:05:33.329 user 0m4.188s 00:05:33.329 sys 0m0.771s 00:05:33.329 19:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.329 ************************************ 00:05:33.329 END TEST rpc 00:05:33.329 ************************************ 00:05:33.329 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.329 19:26:20 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.329 19:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.329 19:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.329 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.329 ************************************ 00:05:33.329 START TEST rpc_client 00:05:33.329 ************************************ 00:05:33.329 19:26:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.329 * Looking for test storage... 00:05:33.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:33.588 19:26:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.588 19:26:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.588 19:26:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.588 19:26:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.588 19:26:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.588 19:26:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.589 19:26:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.589 19:26:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.589 19:26:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.589 19:26:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.589 19:26:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.589 19:26:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.589 19:26:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.589 19:26:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.589 19:26:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.589 19:26:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.589 19:26:20 -- scripts/common.sh@344 -- # : 1 00:05:33.589 19:26:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.589 19:26:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.589 19:26:20 -- scripts/common.sh@364 -- # decimal 1 00:05:33.589 19:26:20 -- scripts/common.sh@352 -- # local d=1 00:05:33.589 19:26:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.589 19:26:20 -- scripts/common.sh@354 -- # echo 1 00:05:33.589 19:26:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.589 19:26:20 -- scripts/common.sh@365 -- # decimal 2 00:05:33.589 19:26:20 -- scripts/common.sh@352 -- # local d=2 00:05:33.589 19:26:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.589 19:26:20 -- scripts/common.sh@354 -- # echo 2 00:05:33.589 19:26:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.589 19:26:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.589 19:26:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.589 19:26:20 -- scripts/common.sh@367 -- # return 0 00:05:33.589 19:26:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.589 19:26:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.589 --rc genhtml_branch_coverage=1 00:05:33.589 --rc genhtml_function_coverage=1 00:05:33.589 --rc genhtml_legend=1 00:05:33.589 --rc geninfo_all_blocks=1 00:05:33.589 --rc geninfo_unexecuted_blocks=1 00:05:33.589 00:05:33.589 ' 00:05:33.589 19:26:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.589 --rc genhtml_branch_coverage=1 00:05:33.589 --rc genhtml_function_coverage=1 00:05:33.589 --rc genhtml_legend=1 00:05:33.589 --rc geninfo_all_blocks=1 00:05:33.589 --rc geninfo_unexecuted_blocks=1 00:05:33.589 00:05:33.589 ' 00:05:33.589 19:26:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.589 --rc genhtml_branch_coverage=1 00:05:33.589 --rc genhtml_function_coverage=1 00:05:33.589 --rc genhtml_legend=1 00:05:33.589 --rc geninfo_all_blocks=1 00:05:33.589 --rc geninfo_unexecuted_blocks=1 00:05:33.589 00:05:33.589 ' 00:05:33.589 19:26:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.589 --rc genhtml_branch_coverage=1 00:05:33.589 --rc genhtml_function_coverage=1 00:05:33.589 --rc genhtml_legend=1 00:05:33.589 --rc geninfo_all_blocks=1 00:05:33.589 --rc geninfo_unexecuted_blocks=1 00:05:33.589 00:05:33.589 ' 00:05:33.589 19:26:20 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:33.589 OK 00:05:33.589 19:26:20 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.589 00:05:33.589 real 0m0.199s 00:05:33.589 user 0m0.124s 00:05:33.589 sys 0m0.085s 00:05:33.589 19:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.589 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 ************************************ 00:05:33.589 END TEST rpc_client 00:05:33.589 ************************************ 00:05:33.589 19:26:20 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.589 19:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.589 19:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.589 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 ************************************ 00:05:33.589 START TEST 
json_config 00:05:33.589 ************************************ 00:05:33.589 19:26:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.589 19:26:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.589 19:26:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.589 19:26:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.849 19:26:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.849 19:26:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.849 19:26:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.849 19:26:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.849 19:26:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.849 19:26:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.849 19:26:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.849 19:26:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.849 19:26:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.849 19:26:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.849 19:26:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.849 19:26:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.849 19:26:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.849 19:26:20 -- scripts/common.sh@344 -- # : 1 00:05:33.849 19:26:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.849 19:26:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.849 19:26:20 -- scripts/common.sh@364 -- # decimal 1 00:05:33.849 19:26:20 -- scripts/common.sh@352 -- # local d=1 00:05:33.849 19:26:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.849 19:26:20 -- scripts/common.sh@354 -- # echo 1 00:05:33.849 19:26:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.849 19:26:20 -- scripts/common.sh@365 -- # decimal 2 00:05:33.849 19:26:20 -- scripts/common.sh@352 -- # local d=2 00:05:33.849 19:26:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.849 19:26:20 -- scripts/common.sh@354 -- # echo 2 00:05:33.849 19:26:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.849 19:26:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.849 19:26:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.849 19:26:20 -- scripts/common.sh@367 -- # return 0 00:05:33.849 19:26:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.849 19:26:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.849 --rc genhtml_branch_coverage=1 00:05:33.849 --rc genhtml_function_coverage=1 00:05:33.849 --rc genhtml_legend=1 00:05:33.849 --rc geninfo_all_blocks=1 00:05:33.849 --rc geninfo_unexecuted_blocks=1 00:05:33.849 00:05:33.849 ' 00:05:33.849 19:26:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.849 --rc genhtml_branch_coverage=1 00:05:33.849 --rc genhtml_function_coverage=1 00:05:33.849 --rc genhtml_legend=1 00:05:33.849 --rc geninfo_all_blocks=1 00:05:33.849 --rc geninfo_unexecuted_blocks=1 00:05:33.849 00:05:33.849 ' 00:05:33.849 19:26:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.849 --rc genhtml_branch_coverage=1 00:05:33.849 --rc genhtml_function_coverage=1 00:05:33.849 --rc genhtml_legend=1 00:05:33.849 --rc 
geninfo_all_blocks=1 00:05:33.849 --rc geninfo_unexecuted_blocks=1 00:05:33.849 00:05:33.849 ' 00:05:33.849 19:26:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.849 --rc genhtml_branch_coverage=1 00:05:33.849 --rc genhtml_function_coverage=1 00:05:33.849 --rc genhtml_legend=1 00:05:33.849 --rc geninfo_all_blocks=1 00:05:33.849 --rc geninfo_unexecuted_blocks=1 00:05:33.849 00:05:33.849 ' 00:05:33.849 19:26:20 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.849 19:26:20 -- nvmf/common.sh@7 -- # uname -s 00:05:33.849 19:26:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.849 19:26:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.849 19:26:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.849 19:26:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.849 19:26:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.849 19:26:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.849 19:26:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.849 19:26:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.849 19:26:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.849 19:26:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.849 19:26:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:05:33.849 19:26:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:05:33.849 19:26:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.849 19:26:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.849 19:26:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.849 19:26:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.849 19:26:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.849 19:26:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.849 19:26:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.849 19:26:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.849 19:26:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.849 19:26:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.849 
19:26:20 -- paths/export.sh@5 -- # export PATH 00:05:33.849 19:26:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.849 19:26:20 -- nvmf/common.sh@46 -- # : 0 00:05:33.849 19:26:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:33.849 19:26:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:33.849 19:26:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:33.849 19:26:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.849 19:26:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.849 19:26:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:33.849 19:26:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:33.849 19:26:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:33.849 19:26:20 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.849 19:26:20 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.849 19:26:20 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:33.849 19:26:20 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.849 19:26:20 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:33.849 19:26:20 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.849 19:26:20 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:33.849 19:26:20 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:33.849 19:26:20 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:33.849 19:26:20 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:33.849 19:26:20 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.849 INFO: JSON configuration test init 00:05:33.849 19:26:20 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:33.849 19:26:20 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:33.849 19:26:20 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:33.849 19:26:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.849 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.849 19:26:20 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:33.849 19:26:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.849 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.849 19:26:20 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.849 19:26:20 -- json_config/json_config.sh@98 -- # local app=target 00:05:33.849 
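The start-up being waited on in the json_config flow above amounts to launching spdk_tgt on a private RPC socket and polling until rpc.py answers; a minimal stand-alone sketch of that pattern, reusing the binary, flags, and socket path visible in this log (the polling loop is an illustrative assumption, not the test's exact waitforlisten helper):

    #!/usr/bin/env bash
    # Sketch: start an SPDK target on a private RPC socket and wait until it answers.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!

    # Poll the RPC socket for up to ~30s; rpc_get_methods succeeds once the app listens.
    for _ in $(seq 1 30); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done
    echo "target is up (pid $tgt_pid)"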
19:26:20 -- json_config/json_config.sh@99 -- # shift 00:05:33.849 19:26:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:33.849 19:26:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:33.849 19:26:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=67576 00:05:33.849 Waiting for target to run... 00:05:33.849 19:26:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:33.849 19:26:20 -- json_config/json_config.sh@114 -- # waitforlisten 67576 /var/tmp/spdk_tgt.sock 00:05:33.849 19:26:20 -- common/autotest_common.sh@829 -- # '[' -z 67576 ']' 00:05:33.849 19:26:20 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.849 19:26:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.849 19:26:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.849 19:26:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.849 19:26:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.849 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:33.850 [2024-12-15 19:26:20.664460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:33.850 [2024-12-15 19:26:20.664603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67576 ] 00:05:34.416 [2024-12-15 19:26:21.201671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.416 [2024-12-15 19:26:21.268092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.416 [2024-12-15 19:26:21.268253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.674 19:26:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.674 19:26:21 -- common/autotest_common.sh@862 -- # return 0 00:05:34.674 00:05:34.674 19:26:21 -- json_config/json_config.sh@115 -- # echo '' 00:05:34.674 19:26:21 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:34.674 19:26:21 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:34.674 19:26:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.674 19:26:21 -- common/autotest_common.sh@10 -- # set +x 00:05:34.674 19:26:21 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:34.674 19:26:21 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:34.674 19:26:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.674 19:26:21 -- common/autotest_common.sh@10 -- # set +x 00:05:34.932 19:26:21 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:34.932 19:26:21 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:34.932 19:26:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:35.204 19:26:22 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:35.204 19:26:22 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:35.204 19:26:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.204 19:26:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.204 19:26:22 -- json_config/json_config.sh@48 -- # local ret=0 00:05:35.204 19:26:22 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:35.204 19:26:22 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:35.204 19:26:22 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:35.204 19:26:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:35.204 19:26:22 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:35.477 19:26:22 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:35.477 19:26:22 -- json_config/json_config.sh@51 -- # local get_types 00:05:35.477 19:26:22 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:35.477 19:26:22 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:35.477 19:26:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.477 19:26:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.737 19:26:22 -- json_config/json_config.sh@58 -- # return 0 00:05:35.737 19:26:22 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:35.737 19:26:22 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:35.737 19:26:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.737 19:26:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.737 19:26:22 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:35.737 19:26:22 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:35.737 19:26:22 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.737 19:26:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.995 MallocForNvmf0 00:05:35.995 19:26:22 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.995 19:26:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.254 MallocForNvmf1 00:05:36.254 19:26:22 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.254 19:26:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.254 [2024-12-15 19:26:23.106657] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.254 19:26:23 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.254 19:26:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.512 19:26:23 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.512 19:26:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.771 19:26:23 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.771 19:26:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:37.030 19:26:23 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:37.030 19:26:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:37.288 [2024-12-15 19:26:24.007279] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.288 19:26:24 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:37.288 19:26:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.288 19:26:24 -- common/autotest_common.sh@10 -- # set +x 00:05:37.288 19:26:24 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:37.288 19:26:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.288 19:26:24 -- common/autotest_common.sh@10 -- # set +x 00:05:37.288 19:26:24 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:37.288 19:26:24 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.288 19:26:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.546 MallocBdevForConfigChangeCheck 00:05:37.546 19:26:24 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:37.547 19:26:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.547 19:26:24 -- common/autotest_common.sh@10 -- # set +x 00:05:37.547 19:26:24 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:37.547 19:26:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.114 INFO: shutting down applications... 00:05:38.114 19:26:24 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
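The NVMe-oF/TCP configuration exercised just above can be replayed by hand with the same RPCs; a condensed sketch using exactly the commands and arguments shown in this log (socket path, NQN, malloc bdev sizes, and the 127.0.0.1:4420 listener all come from the output above):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Malloc bdevs back the namespaces; then create transport, subsystem, namespaces, listener.
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420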
00:05:38.114 19:26:24 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:38.114 19:26:24 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:38.114 19:26:24 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:38.114 19:26:24 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:38.372 Calling clear_iscsi_subsystem 00:05:38.372 Calling clear_nvmf_subsystem 00:05:38.372 Calling clear_nbd_subsystem 00:05:38.372 Calling clear_ublk_subsystem 00:05:38.372 Calling clear_vhost_blk_subsystem 00:05:38.372 Calling clear_vhost_scsi_subsystem 00:05:38.372 Calling clear_scheduler_subsystem 00:05:38.372 Calling clear_bdev_subsystem 00:05:38.372 Calling clear_accel_subsystem 00:05:38.372 Calling clear_vmd_subsystem 00:05:38.372 Calling clear_sock_subsystem 00:05:38.372 Calling clear_iobuf_subsystem 00:05:38.372 19:26:25 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:38.372 19:26:25 -- json_config/json_config.sh@396 -- # count=100 00:05:38.372 19:26:25 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:38.372 19:26:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.372 19:26:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:38.372 19:26:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.630 19:26:25 -- json_config/json_config.sh@398 -- # break 00:05:38.630 19:26:25 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:38.630 19:26:25 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:38.630 19:26:25 -- json_config/json_config.sh@120 -- # local app=target 00:05:38.630 19:26:25 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:38.630 19:26:25 -- json_config/json_config.sh@124 -- # [[ -n 67576 ]] 00:05:38.630 19:26:25 -- json_config/json_config.sh@127 -- # kill -SIGINT 67576 00:05:38.630 19:26:25 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:38.630 19:26:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:38.630 19:26:25 -- json_config/json_config.sh@130 -- # kill -0 67576 00:05:38.630 19:26:25 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:39.198 19:26:25 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:39.198 19:26:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:39.198 19:26:25 -- json_config/json_config.sh@130 -- # kill -0 67576 00:05:39.198 19:26:25 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:39.198 19:26:25 -- json_config/json_config.sh@132 -- # break 00:05:39.198 19:26:25 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:39.198 19:26:25 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:39.198 SPDK target shutdown done 00:05:39.198 19:26:25 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:39.198 INFO: relaunching applications... 00:05:39.198 19:26:25 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.198 19:26:25 -- json_config/json_config.sh@98 -- # local app=target 00:05:39.198 19:26:25 -- json_config/json_config.sh@99 -- # shift 00:05:39.198 Waiting for target to run... 
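The shutdown and relaunch traced here follow the usual save_config round trip: snapshot the live configuration to JSON, stop the target, then start a fresh one directly from that file. A minimal sketch with the same paths (tgt_pid stands for the running target's PID, 67576 above; the real test's kill/wait handling is more involved than shown):

    SPDK=/home/vagrant/spdk_repo/spdk
    CFG="$SPDK/spdk_tgt_config.json"

    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > "$CFG"   # snapshot live config
    kill -SIGINT "$tgt_pid" && wait "$tgt_pid"                              # clean shutdown

    # Relaunch from the saved JSON instead of replaying individual RPCs.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &
    tgt_pid=$!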
00:05:39.198 19:26:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:39.198 19:26:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:39.198 19:26:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:39.198 19:26:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:39.198 19:26:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:39.198 19:26:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=67845 00:05:39.198 19:26:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:39.198 19:26:25 -- json_config/json_config.sh@114 -- # waitforlisten 67845 /var/tmp/spdk_tgt.sock 00:05:39.198 19:26:25 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.198 19:26:25 -- common/autotest_common.sh@829 -- # '[' -z 67845 ']' 00:05:39.198 19:26:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.198 19:26:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.198 19:26:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.198 19:26:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.198 19:26:25 -- common/autotest_common.sh@10 -- # set +x 00:05:39.198 [2024-12-15 19:26:25.963720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:39.198 [2024-12-15 19:26:25.963843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67845 ] 00:05:39.765 [2024-12-15 19:26:26.475308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.765 [2024-12-15 19:26:26.541385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.765 [2024-12-15 19:26:26.541540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.024 [2024-12-15 19:26:26.850835] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.024 [2024-12-15 19:26:26.882971] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.283 00:05:40.283 INFO: Checking if target configuration is the same... 00:05:40.284 19:26:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.284 19:26:26 -- common/autotest_common.sh@862 -- # return 0 00:05:40.284 19:26:26 -- json_config/json_config.sh@115 -- # echo '' 00:05:40.284 19:26:26 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:40.284 19:26:26 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
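The check announced here, and traced in the lines that follow, boils down to normalizing both configurations and diffing them: the live save_config output on one side, the saved spdk_tgt_config.json on the other. A reduced sketch of that comparison (config_filter.py is the test's own helper; the assumption that it reads the config on stdin follows from the redirections in the trace below):

    SPDK=/home/vagrant/spdk_repo/spdk
    FILTER="$SPDK/test/json_config/config_filter.py"

    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    # Sort both sides before comparing so key order cannot cause spurious diffs.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$live"
    "$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > "$saved"

    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"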
00:05:40.284 19:26:26 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.284 19:26:26 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:40.284 19:26:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.284 + '[' 2 -ne 2 ']' 00:05:40.284 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:40.284 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:40.284 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:40.284 +++ basename /dev/fd/62 00:05:40.284 ++ mktemp /tmp/62.XXX 00:05:40.284 + tmp_file_1=/tmp/62.LM2 00:05:40.284 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.284 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.284 + tmp_file_2=/tmp/spdk_tgt_config.json.EfX 00:05:40.284 + ret=0 00:05:40.284 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:40.543 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:40.543 + diff -u /tmp/62.LM2 /tmp/spdk_tgt_config.json.EfX 00:05:40.543 INFO: JSON config files are the same 00:05:40.543 + echo 'INFO: JSON config files are the same' 00:05:40.543 + rm /tmp/62.LM2 /tmp/spdk_tgt_config.json.EfX 00:05:40.543 + exit 0 00:05:40.543 INFO: changing configuration and checking if this can be detected... 00:05:40.543 19:26:27 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:40.543 19:26:27 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.543 19:26:27 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.543 19:26:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.802 19:26:27 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:40.802 19:26:27 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.802 19:26:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.802 + '[' 2 -ne 2 ']' 00:05:40.802 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:40.802 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:40.802 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:40.802 +++ basename /dev/fd/62 00:05:40.802 ++ mktemp /tmp/62.XXX 00:05:40.802 + tmp_file_1=/tmp/62.Kwg 00:05:40.802 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.802 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.802 + tmp_file_2=/tmp/spdk_tgt_config.json.D7n 00:05:40.802 + ret=0 00:05:40.802 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.368 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.368 + diff -u /tmp/62.Kwg /tmp/spdk_tgt_config.json.D7n 00:05:41.368 + ret=1 00:05:41.368 + echo '=== Start of file: /tmp/62.Kwg ===' 00:05:41.368 + cat /tmp/62.Kwg 00:05:41.368 + echo '=== End of file: /tmp/62.Kwg ===' 00:05:41.368 + echo '' 00:05:41.368 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D7n ===' 00:05:41.368 + cat /tmp/spdk_tgt_config.json.D7n 00:05:41.368 + echo '=== End of file: /tmp/spdk_tgt_config.json.D7n ===' 00:05:41.368 + echo '' 00:05:41.368 + rm /tmp/62.Kwg /tmp/spdk_tgt_config.json.D7n 00:05:41.368 + exit 1 00:05:41.368 INFO: configuration change detected. 00:05:41.368 19:26:28 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:41.368 19:26:28 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:41.368 19:26:28 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:41.368 19:26:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.368 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 19:26:28 -- json_config/json_config.sh@360 -- # local ret=0 00:05:41.368 19:26:28 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:41.368 19:26:28 -- json_config/json_config.sh@370 -- # [[ -n 67845 ]] 00:05:41.368 19:26:28 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:41.368 19:26:28 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.368 19:26:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.368 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 19:26:28 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:41.368 19:26:28 -- json_config/json_config.sh@246 -- # uname -s 00:05:41.368 19:26:28 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:41.368 19:26:28 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:41.368 19:26:28 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:41.369 19:26:28 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.369 19:26:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.369 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.369 19:26:28 -- json_config/json_config.sh@376 -- # killprocess 67845 00:05:41.369 19:26:28 -- common/autotest_common.sh@936 -- # '[' -z 67845 ']' 00:05:41.369 19:26:28 -- common/autotest_common.sh@940 -- # kill -0 67845 00:05:41.369 19:26:28 -- common/autotest_common.sh@941 -- # uname 00:05:41.369 19:26:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.369 19:26:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67845 00:05:41.369 killing process with pid 67845 00:05:41.369 19:26:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.369 19:26:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.369 19:26:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67845' 00:05:41.369 
19:26:28 -- common/autotest_common.sh@955 -- # kill 67845 00:05:41.369 19:26:28 -- common/autotest_common.sh@960 -- # wait 67845 00:05:41.628 19:26:28 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:41.628 19:26:28 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:41.628 19:26:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.628 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.887 INFO: Success 00:05:41.887 19:26:28 -- json_config/json_config.sh@381 -- # return 0 00:05:41.887 19:26:28 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:41.887 00:05:41.887 real 0m8.131s 00:05:41.887 user 0m11.224s 00:05:41.887 sys 0m2.031s 00:05:41.887 19:26:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.887 ************************************ 00:05:41.887 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.887 END TEST json_config 00:05:41.887 ************************************ 00:05:41.887 19:26:28 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.888 19:26:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.888 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.888 ************************************ 00:05:41.888 START TEST json_config_extra_key 00:05:41.888 ************************************ 00:05:41.888 19:26:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.888 19:26:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.888 19:26:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.888 19:26:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.888 19:26:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.888 19:26:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.888 19:26:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.888 19:26:28 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.888 19:26:28 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.888 19:26:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.888 19:26:28 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.888 19:26:28 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.888 19:26:28 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.888 19:26:28 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.888 19:26:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.888 19:26:28 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.888 19:26:28 -- scripts/common.sh@344 -- # : 1 00:05:41.888 19:26:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.888 19:26:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.888 19:26:28 -- scripts/common.sh@364 -- # decimal 1 00:05:41.888 19:26:28 -- scripts/common.sh@352 -- # local d=1 00:05:41.888 19:26:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.888 19:26:28 -- scripts/common.sh@354 -- # echo 1 00:05:41.888 19:26:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.888 19:26:28 -- scripts/common.sh@365 -- # decimal 2 00:05:41.888 19:26:28 -- scripts/common.sh@352 -- # local d=2 00:05:41.888 19:26:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.888 19:26:28 -- scripts/common.sh@354 -- # echo 2 00:05:41.888 19:26:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.888 19:26:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.888 19:26:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.888 19:26:28 -- scripts/common.sh@367 -- # return 0 00:05:41.888 19:26:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.888 --rc genhtml_branch_coverage=1 00:05:41.888 --rc genhtml_function_coverage=1 00:05:41.888 --rc genhtml_legend=1 00:05:41.888 --rc geninfo_all_blocks=1 00:05:41.888 --rc geninfo_unexecuted_blocks=1 00:05:41.888 00:05:41.888 ' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.888 --rc genhtml_branch_coverage=1 00:05:41.888 --rc genhtml_function_coverage=1 00:05:41.888 --rc genhtml_legend=1 00:05:41.888 --rc geninfo_all_blocks=1 00:05:41.888 --rc geninfo_unexecuted_blocks=1 00:05:41.888 00:05:41.888 ' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.888 --rc genhtml_branch_coverage=1 00:05:41.888 --rc genhtml_function_coverage=1 00:05:41.888 --rc genhtml_legend=1 00:05:41.888 --rc geninfo_all_blocks=1 00:05:41.888 --rc geninfo_unexecuted_blocks=1 00:05:41.888 00:05:41.888 ' 00:05:41.888 19:26:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.888 --rc genhtml_branch_coverage=1 00:05:41.888 --rc genhtml_function_coverage=1 00:05:41.888 --rc genhtml_legend=1 00:05:41.888 --rc geninfo_all_blocks=1 00:05:41.888 --rc geninfo_unexecuted_blocks=1 00:05:41.888 00:05:41.888 ' 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.888 19:26:28 -- nvmf/common.sh@7 -- # uname -s 00:05:41.888 19:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.888 19:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.888 19:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.888 19:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.888 19:26:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.888 19:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.888 19:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.888 19:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.888 19:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.888 19:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.888 19:26:28 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:05:41.888 19:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:05:41.888 19:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.888 19:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.888 19:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.888 19:26:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.888 19:26:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.888 19:26:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.888 19:26:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.888 19:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.888 19:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.888 19:26:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.888 19:26:28 -- paths/export.sh@5 -- # export PATH 00:05:41.888 19:26:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.888 19:26:28 -- nvmf/common.sh@46 -- # : 0 00:05:41.888 19:26:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:41.888 19:26:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:41.888 19:26:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:41.888 19:26:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.888 19:26:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.888 19:26:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:41.888 19:26:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:41.888 19:26:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.888 INFO: launching applications... 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68028 00:05:41.888 Waiting for target to run... 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.888 19:26:28 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68028 /var/tmp/spdk_tgt.sock 00:05:41.888 19:26:28 -- common/autotest_common.sh@829 -- # '[' -z 68028 ']' 00:05:41.888 19:26:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.888 19:26:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.888 19:26:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.888 19:26:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.888 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:42.147 [2024-12-15 19:26:28.825299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:42.147 [2024-12-15 19:26:28.825407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68028 ] 00:05:42.714 [2024-12-15 19:26:29.355499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.714 [2024-12-15 19:26:29.426887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.714 [2024-12-15 19:26:29.427063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.971 19:26:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.971 19:26:29 -- common/autotest_common.sh@862 -- # return 0 00:05:42.971 00:05:42.971 19:26:29 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:42.971 INFO: shutting down applications... 00:05:42.971 19:26:29 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:42.971 19:26:29 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:42.971 19:26:29 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:42.971 19:26:29 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68028 ]] 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68028 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68028 00:05:42.972 19:26:29 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:43.538 19:26:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:43.538 19:26:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:43.538 19:26:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68028 00:05:43.538 19:26:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68028 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:44.105 SPDK target shutdown done 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:44.105 Success 00:05:44.105 19:26:30 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:44.105 00:05:44.105 real 0m2.212s 00:05:44.105 user 0m1.615s 00:05:44.105 sys 0m0.568s 00:05:44.105 19:26:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.105 ************************************ 00:05:44.105 19:26:30 -- common/autotest_common.sh@10 -- # set +x 00:05:44.105 END TEST json_config_extra_key 00:05:44.105 ************************************ 00:05:44.105 19:26:30 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.105 19:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.105 19:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:44.105 19:26:30 -- common/autotest_common.sh@10 -- # set +x 00:05:44.105 ************************************ 00:05:44.105 START TEST alias_rpc 00:05:44.105 ************************************ 00:05:44.105 19:26:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.105 * Looking for test storage... 00:05:44.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:44.105 19:26:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.105 19:26:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.105 19:26:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.364 19:26:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.364 19:26:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.364 19:26:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.364 19:26:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.365 19:26:31 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.365 19:26:31 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.365 19:26:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.365 19:26:31 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.365 19:26:31 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.365 19:26:31 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.365 19:26:31 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.365 19:26:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.365 19:26:31 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.365 19:26:31 -- scripts/common.sh@344 -- # : 1 00:05:44.365 19:26:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.365 19:26:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.365 19:26:31 -- scripts/common.sh@364 -- # decimal 1 00:05:44.365 19:26:31 -- scripts/common.sh@352 -- # local d=1 00:05:44.365 19:26:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.365 19:26:31 -- scripts/common.sh@354 -- # echo 1 00:05:44.365 19:26:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.365 19:26:31 -- scripts/common.sh@365 -- # decimal 2 00:05:44.365 19:26:31 -- scripts/common.sh@352 -- # local d=2 00:05:44.365 19:26:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.365 19:26:31 -- scripts/common.sh@354 -- # echo 2 00:05:44.365 19:26:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.365 19:26:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.365 19:26:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.365 19:26:31 -- scripts/common.sh@367 -- # return 0 00:05:44.365 19:26:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.365 19:26:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.365 --rc genhtml_branch_coverage=1 00:05:44.365 --rc genhtml_function_coverage=1 00:05:44.365 --rc genhtml_legend=1 00:05:44.365 --rc geninfo_all_blocks=1 00:05:44.365 --rc geninfo_unexecuted_blocks=1 00:05:44.365 00:05:44.365 ' 00:05:44.365 19:26:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.365 --rc genhtml_branch_coverage=1 00:05:44.365 --rc genhtml_function_coverage=1 00:05:44.365 --rc genhtml_legend=1 00:05:44.365 --rc geninfo_all_blocks=1 00:05:44.365 --rc geninfo_unexecuted_blocks=1 00:05:44.365 00:05:44.365 ' 00:05:44.365 19:26:31 
-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.365 --rc genhtml_branch_coverage=1 00:05:44.365 --rc genhtml_function_coverage=1 00:05:44.365 --rc genhtml_legend=1 00:05:44.365 --rc geninfo_all_blocks=1 00:05:44.365 --rc geninfo_unexecuted_blocks=1 00:05:44.365 00:05:44.365 ' 00:05:44.365 19:26:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.365 --rc genhtml_branch_coverage=1 00:05:44.365 --rc genhtml_function_coverage=1 00:05:44.365 --rc genhtml_legend=1 00:05:44.365 --rc geninfo_all_blocks=1 00:05:44.365 --rc geninfo_unexecuted_blocks=1 00:05:44.365 00:05:44.365 ' 00:05:44.365 19:26:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.365 19:26:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68118 00:05:44.365 19:26:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68118 00:05:44.365 19:26:31 -- common/autotest_common.sh@829 -- # '[' -z 68118 ']' 00:05:44.365 19:26:31 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.365 19:26:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.365 19:26:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.365 19:26:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.365 19:26:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.365 19:26:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.365 [2024-12-15 19:26:31.103784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:44.365 [2024-12-15 19:26:31.103924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68118 ] 00:05:44.365 [2024-12-15 19:26:31.239638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.624 [2024-12-15 19:26:31.308107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.624 [2024-12-15 19:26:31.308289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.191 19:26:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.191 19:26:32 -- common/autotest_common.sh@862 -- # return 0 00:05:45.191 19:26:32 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:45.760 19:26:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68118 00:05:45.760 19:26:32 -- common/autotest_common.sh@936 -- # '[' -z 68118 ']' 00:05:45.760 19:26:32 -- common/autotest_common.sh@940 -- # kill -0 68118 00:05:45.760 19:26:32 -- common/autotest_common.sh@941 -- # uname 00:05:45.760 19:26:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.760 19:26:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68118 00:05:45.760 killing process with pid 68118 00:05:45.760 19:26:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.760 19:26:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.760 19:26:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68118' 00:05:45.760 19:26:32 -- common/autotest_common.sh@955 -- # kill 68118 00:05:45.760 19:26:32 -- common/autotest_common.sh@960 -- # wait 68118 00:05:46.328 ************************************ 00:05:46.328 END TEST alias_rpc 00:05:46.328 ************************************ 00:05:46.328 00:05:46.328 real 0m2.070s 00:05:46.328 user 0m2.264s 00:05:46.328 sys 0m0.533s 00:05:46.328 19:26:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.328 19:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.328 19:26:32 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:46.328 19:26:32 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.328 19:26:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.328 19:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.328 19:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.328 ************************************ 00:05:46.328 START TEST dpdk_mem_utility 00:05:46.328 ************************************ 00:05:46.328 19:26:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.328 * Looking for test storage... 
00:05:46.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.328 19:26:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.328 19:26:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.328 19:26:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.328 19:26:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.328 19:26:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.328 19:26:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.328 19:26:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.328 19:26:33 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.328 19:26:33 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.328 19:26:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.328 19:26:33 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.328 19:26:33 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.328 19:26:33 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.328 19:26:33 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.328 19:26:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.328 19:26:33 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.328 19:26:33 -- scripts/common.sh@344 -- # : 1 00:05:46.328 19:26:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.328 19:26:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.328 19:26:33 -- scripts/common.sh@364 -- # decimal 1 00:05:46.328 19:26:33 -- scripts/common.sh@352 -- # local d=1 00:05:46.328 19:26:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.328 19:26:33 -- scripts/common.sh@354 -- # echo 1 00:05:46.328 19:26:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.328 19:26:33 -- scripts/common.sh@365 -- # decimal 2 00:05:46.328 19:26:33 -- scripts/common.sh@352 -- # local d=2 00:05:46.328 19:26:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.328 19:26:33 -- scripts/common.sh@354 -- # echo 2 00:05:46.328 19:26:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.328 19:26:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.328 19:26:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.328 19:26:33 -- scripts/common.sh@367 -- # return 0 00:05:46.328 19:26:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.328 19:26:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.328 --rc genhtml_branch_coverage=1 00:05:46.328 --rc genhtml_function_coverage=1 00:05:46.328 --rc genhtml_legend=1 00:05:46.328 --rc geninfo_all_blocks=1 00:05:46.328 --rc geninfo_unexecuted_blocks=1 00:05:46.328 00:05:46.328 ' 00:05:46.328 19:26:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.328 --rc genhtml_branch_coverage=1 00:05:46.328 --rc genhtml_function_coverage=1 00:05:46.328 --rc genhtml_legend=1 00:05:46.328 --rc geninfo_all_blocks=1 00:05:46.328 --rc geninfo_unexecuted_blocks=1 00:05:46.328 00:05:46.328 ' 00:05:46.328 19:26:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.328 --rc genhtml_branch_coverage=1 00:05:46.328 --rc genhtml_function_coverage=1 00:05:46.328 --rc genhtml_legend=1 00:05:46.328 --rc geninfo_all_blocks=1 00:05:46.328 --rc geninfo_unexecuted_blocks=1 00:05:46.328 00:05:46.328 ' 
00:05:46.328 19:26:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.328 --rc genhtml_branch_coverage=1 00:05:46.328 --rc genhtml_function_coverage=1 00:05:46.328 --rc genhtml_legend=1 00:05:46.328 --rc geninfo_all_blocks=1 00:05:46.328 --rc geninfo_unexecuted_blocks=1 00:05:46.328 00:05:46.328 ' 00:05:46.328 19:26:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.328 19:26:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68217 00:05:46.328 19:26:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68217 00:05:46.328 19:26:33 -- common/autotest_common.sh@829 -- # '[' -z 68217 ']' 00:05:46.328 19:26:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.328 19:26:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.328 19:26:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.328 19:26:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.328 19:26:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.328 19:26:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.587 [2024-12-15 19:26:33.224129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:46.587 [2024-12-15 19:26:33.224543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68217 ] 00:05:46.587 [2024-12-15 19:26:33.357837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.587 [2024-12-15 19:26:33.433313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.587 [2024-12-15 19:26:33.433738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.548 19:26:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.548 19:26:34 -- common/autotest_common.sh@862 -- # return 0 00:05:47.548 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.548 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.548 19:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.548 19:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.548 { 00:05:47.548 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.548 } 00:05:47.548 19:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.548 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.548 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.548 1 heaps totaling size 814.000000 MiB 00:05:47.548 size: 814.000000 MiB heap id: 0 00:05:47.548 end heaps---------- 00:05:47.548 8 mempools totaling size 598.116089 MiB 00:05:47.548 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.548 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.548 size: 84.521057 MiB name: bdev_io_68217 00:05:47.548 size: 51.011292 MiB name: evtpool_68217 00:05:47.548 size: 50.003479 MiB name: msgpool_68217 
00:05:47.548 size: 21.763794 MiB name: PDU_Pool 00:05:47.548 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.548 size: 0.026123 MiB name: Session_Pool 00:05:47.548 end mempools------- 00:05:47.548 6 memzones totaling size 4.142822 MiB 00:05:47.548 size: 1.000366 MiB name: RG_ring_0_68217 00:05:47.548 size: 1.000366 MiB name: RG_ring_1_68217 00:05:47.548 size: 1.000366 MiB name: RG_ring_4_68217 00:05:47.548 size: 1.000366 MiB name: RG_ring_5_68217 00:05:47.548 size: 0.125366 MiB name: RG_ring_2_68217 00:05:47.548 size: 0.015991 MiB name: RG_ring_3_68217 00:05:47.548 end memzones------- 00:05:47.548 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.548 heap id: 0 total size: 814.000000 MiB number of busy elements: 212 number of free elements: 15 00:05:47.548 list of free elements. size: 12.488037 MiB 00:05:47.548 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.548 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.549 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.549 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.549 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.549 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.549 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.549 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.549 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:47.549 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:47.549 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:47.549 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:47.549 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.549 element at address: 0x200027e00000 with size: 0.398682 MiB 00:05:47.549 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:47.549 list of standard malloc elements. 
size: 199.249390 MiB 00:05:47.549 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.549 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.549 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.549 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.549 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.549 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.549 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.549 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.549 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.549 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a140 with size: 0.000183 MiB 
00:05:47.549 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:47.549 element at 
address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:47.549 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.550 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e66100 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6cdc0 
with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 
00:05:47.550 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.550 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.550 list of memzone associated elements. size: 602.262573 MiB 00:05:47.550 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.550 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.550 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.550 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.550 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.550 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68217_0 00:05:47.550 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.550 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68217_0 00:05:47.550 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.550 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68217_0 00:05:47.550 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.550 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.550 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.550 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.550 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.550 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68217 00:05:47.550 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.550 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68217 00:05:47.550 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.550 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68217 00:05:47.550 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.550 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.550 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.550 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.550 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.550 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.550 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.550 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.550 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.550 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68217 00:05:47.550 element at address: 0x200003affc00 with size: 
1.000488 MiB 00:05:47.550 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68217 00:05:47.550 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.550 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68217 00:05:47.550 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.550 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68217 00:05:47.550 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.550 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68217 00:05:47.550 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.550 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.550 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.550 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.550 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.550 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.550 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.550 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68217 00:05:47.550 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.550 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.550 element at address: 0x200027e66280 with size: 0.023743 MiB 00:05:47.550 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.550 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:47.550 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68217 00:05:47.550 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:05:47.550 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.550 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:47.550 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68217 00:05:47.550 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.550 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68217 00:05:47.550 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:05:47.550 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.550 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.550 19:26:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68217 00:05:47.550 19:26:34 -- common/autotest_common.sh@936 -- # '[' -z 68217 ']' 00:05:47.550 19:26:34 -- common/autotest_common.sh@940 -- # kill -0 68217 00:05:47.550 19:26:34 -- common/autotest_common.sh@941 -- # uname 00:05:47.550 19:26:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.550 19:26:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68217 00:05:47.550 killing process with pid 68217 00:05:47.550 19:26:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.550 19:26:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.550 19:26:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68217' 00:05:47.550 19:26:34 -- common/autotest_common.sh@955 -- # kill 68217 00:05:47.550 19:26:34 -- common/autotest_common.sh@960 -- # wait 68217 00:05:48.118 00:05:48.118 real 0m1.914s 00:05:48.118 user 0m1.996s 00:05:48.118 sys 0m0.538s 00:05:48.118 ************************************ 00:05:48.118 END TEST dpdk_mem_utility 00:05:48.118 ************************************ 00:05:48.118 19:26:34 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.118 19:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.118 19:26:34 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.118 19:26:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.118 19:26:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.118 19:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.118 ************************************ 00:05:48.118 START TEST event 00:05:48.118 ************************************ 00:05:48.118 19:26:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.377 * Looking for test storage... 00:05:48.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.377 19:26:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:48.377 19:26:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:48.377 19:26:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:48.377 19:26:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:48.377 19:26:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:48.377 19:26:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:48.377 19:26:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:48.377 19:26:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:48.377 19:26:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.377 19:26:35 -- scripts/common.sh@336 -- # read -ra ver2 00:05:48.377 19:26:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:48.377 19:26:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:48.377 19:26:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:48.377 19:26:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:48.377 19:26:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:48.377 19:26:35 -- scripts/common.sh@344 -- # : 1 00:05:48.377 19:26:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:48.377 19:26:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.377 19:26:35 -- scripts/common.sh@364 -- # decimal 1 00:05:48.377 19:26:35 -- scripts/common.sh@352 -- # local d=1 00:05:48.377 19:26:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.377 19:26:35 -- scripts/common.sh@354 -- # echo 1 00:05:48.377 19:26:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:48.377 19:26:35 -- scripts/common.sh@365 -- # decimal 2 00:05:48.377 19:26:35 -- scripts/common.sh@352 -- # local d=2 00:05:48.377 19:26:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.377 19:26:35 -- scripts/common.sh@354 -- # echo 2 00:05:48.377 19:26:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:48.377 19:26:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:48.377 19:26:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:48.377 19:26:35 -- scripts/common.sh@367 -- # return 0 00:05:48.377 19:26:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:48.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.377 --rc genhtml_branch_coverage=1 00:05:48.377 --rc genhtml_function_coverage=1 00:05:48.377 --rc genhtml_legend=1 00:05:48.377 --rc geninfo_all_blocks=1 00:05:48.377 --rc geninfo_unexecuted_blocks=1 00:05:48.377 00:05:48.377 ' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:48.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.377 --rc genhtml_branch_coverage=1 00:05:48.377 --rc genhtml_function_coverage=1 00:05:48.377 --rc genhtml_legend=1 00:05:48.377 --rc geninfo_all_blocks=1 00:05:48.377 --rc geninfo_unexecuted_blocks=1 00:05:48.377 00:05:48.377 ' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:48.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.377 --rc genhtml_branch_coverage=1 00:05:48.377 --rc genhtml_function_coverage=1 00:05:48.377 --rc genhtml_legend=1 00:05:48.377 --rc geninfo_all_blocks=1 00:05:48.377 --rc geninfo_unexecuted_blocks=1 00:05:48.377 00:05:48.377 ' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:48.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.377 --rc genhtml_branch_coverage=1 00:05:48.377 --rc genhtml_function_coverage=1 00:05:48.377 --rc genhtml_legend=1 00:05:48.377 --rc geninfo_all_blocks=1 00:05:48.377 --rc geninfo_unexecuted_blocks=1 00:05:48.377 00:05:48.377 ' 00:05:48.377 19:26:35 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.377 19:26:35 -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.377 19:26:35 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.377 19:26:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:48.377 19:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.377 19:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.377 ************************************ 00:05:48.377 START TEST event_perf 00:05:48.377 ************************************ 00:05:48.377 19:26:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.377 Running I/O for 1 seconds...[2024-12-15 19:26:35.182438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
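The lcov version gate traced above (scripts/common.sh: lt -> cmp_versions) splits each version string on '.', '-' and ':' and compares the pieces numerically. A standalone sketch of the same idea, written here for illustration rather than copied from the script:

  # returns 0 (true) when version $1 sorts strictly before version $2
  version_lt() {
      local IFS=.-:
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                                # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # mirrors the lt 1.15 2 call in the trace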
00:05:48.377 [2024-12-15 19:26:35.183217] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68319 ] 00:05:48.635 [2024-12-15 19:26:35.319685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.635 [2024-12-15 19:26:35.401397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.635 [2024-12-15 19:26:35.401556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.635 [2024-12-15 19:26:35.401691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.635 Running I/O for 1 seconds...[2024-12-15 19:26:35.402020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.010 00:05:50.010 lcore 0: 123037 00:05:50.010 lcore 1: 123036 00:05:50.010 lcore 2: 123036 00:05:50.010 lcore 3: 123036 00:05:50.010 done. 00:05:50.010 ************************************ 00:05:50.010 END TEST event_perf 00:05:50.010 ************************************ 00:05:50.010 00:05:50.010 real 0m1.357s 00:05:50.010 user 0m4.160s 00:05:50.010 sys 0m0.074s 00:05:50.010 19:26:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.010 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:50.010 19:26:36 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.010 19:26:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:50.010 19:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.010 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:50.010 ************************************ 00:05:50.010 START TEST event_reactor 00:05:50.010 ************************************ 00:05:50.010 19:26:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.010 [2024-12-15 19:26:36.590629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
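For reference, the whole event_perf step above boils down to a single invocation of the test binary; the path and flags are copied from the trace (-m 0xF spreads the run across four cores, -t 1 runs it for one second), and the remaining lines are SPDK/DPDK startup output:

  event_perf=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
  "$event_perf" -m 0xF -t 1
  # expected output: one 'lcore N: <event count>' line per core, then 'done.'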
00:05:50.010 [2024-12-15 19:26:36.590737] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68363 ] 00:05:50.010 [2024-12-15 19:26:36.727045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.010 [2024-12-15 19:26:36.785398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.387 test_start 00:05:51.387 oneshot 00:05:51.387 tick 100 00:05:51.387 tick 100 00:05:51.387 tick 250 00:05:51.387 tick 100 00:05:51.387 tick 100 00:05:51.387 tick 100 00:05:51.387 tick 250 00:05:51.387 tick 500 00:05:51.387 tick 100 00:05:51.387 tick 100 00:05:51.387 tick 250 00:05:51.387 tick 100 00:05:51.387 tick 100 00:05:51.387 test_end 00:05:51.387 00:05:51.387 real 0m1.302s 00:05:51.387 user 0m1.142s 00:05:51.387 sys 0m0.055s 00:05:51.387 19:26:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.387 ************************************ 00:05:51.387 END TEST event_reactor 00:05:51.387 ************************************ 00:05:51.387 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 19:26:37 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.387 19:26:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:51.387 19:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.387 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 ************************************ 00:05:51.387 START TEST event_reactor_perf 00:05:51.387 ************************************ 00:05:51.387 19:26:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.387 [2024-12-15 19:26:37.954884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
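The run_test wrapper that frames each of these steps (the START/END banners and the real/user/sys lines) lives in autotest_common.sh; reconstructed loosely and in simplified form from what the xtrace shows, it behaves like:

  run_test() {
      local test_name=$1; shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"                 # the real/user/sys triplet in the log comes from this
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return $rc
  }
  run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1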
00:05:51.387 [2024-12-15 19:26:37.954960] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68393 ] 00:05:51.387 [2024-12-15 19:26:38.075872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.387 [2024-12-15 19:26:38.135495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.324 test_start 00:05:52.324 test_end 00:05:52.324 Performance: 477083 events per second 00:05:52.324 00:05:52.324 real 0m1.267s 00:05:52.324 user 0m1.105s 00:05:52.324 sys 0m0.057s 00:05:52.324 19:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.324 19:26:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.324 ************************************ 00:05:52.324 END TEST event_reactor_perf 00:05:52.324 ************************************ 00:05:52.582 19:26:39 -- event/event.sh@49 -- # uname -s 00:05:52.582 19:26:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.583 19:26:39 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.583 19:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.583 19:26:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.583 ************************************ 00:05:52.583 START TEST event_scheduler 00:05:52.583 ************************************ 00:05:52.583 19:26:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.583 * Looking for test storage... 00:05:52.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:52.583 19:26:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:52.583 19:26:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:52.583 19:26:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:52.583 19:26:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:52.583 19:26:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:52.583 19:26:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:52.583 19:26:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:52.583 19:26:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:52.583 19:26:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.583 19:26:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:52.583 19:26:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:52.583 19:26:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:52.583 19:26:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:52.583 19:26:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:52.583 19:26:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:52.583 19:26:39 -- scripts/common.sh@344 -- # : 1 00:05:52.583 19:26:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:52.583 19:26:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.583 19:26:39 -- scripts/common.sh@364 -- # decimal 1 00:05:52.583 19:26:39 -- scripts/common.sh@352 -- # local d=1 00:05:52.583 19:26:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.583 19:26:39 -- scripts/common.sh@354 -- # echo 1 00:05:52.583 19:26:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:52.583 19:26:39 -- scripts/common.sh@365 -- # decimal 2 00:05:52.583 19:26:39 -- scripts/common.sh@352 -- # local d=2 00:05:52.583 19:26:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.583 19:26:39 -- scripts/common.sh@354 -- # echo 2 00:05:52.583 19:26:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:52.583 19:26:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:52.583 19:26:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:52.583 19:26:39 -- scripts/common.sh@367 -- # return 0 00:05:52.583 19:26:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.583 --rc genhtml_branch_coverage=1 00:05:52.583 --rc genhtml_function_coverage=1 00:05:52.583 --rc genhtml_legend=1 00:05:52.583 --rc geninfo_all_blocks=1 00:05:52.583 --rc geninfo_unexecuted_blocks=1 00:05:52.583 00:05:52.583 ' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.583 --rc genhtml_branch_coverage=1 00:05:52.583 --rc genhtml_function_coverage=1 00:05:52.583 --rc genhtml_legend=1 00:05:52.583 --rc geninfo_all_blocks=1 00:05:52.583 --rc geninfo_unexecuted_blocks=1 00:05:52.583 00:05:52.583 ' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.583 --rc genhtml_branch_coverage=1 00:05:52.583 --rc genhtml_function_coverage=1 00:05:52.583 --rc genhtml_legend=1 00:05:52.583 --rc geninfo_all_blocks=1 00:05:52.583 --rc geninfo_unexecuted_blocks=1 00:05:52.583 00:05:52.583 ' 00:05:52.583 19:26:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.583 --rc genhtml_branch_coverage=1 00:05:52.583 --rc genhtml_function_coverage=1 00:05:52.583 --rc genhtml_legend=1 00:05:52.583 --rc geninfo_all_blocks=1 00:05:52.583 --rc geninfo_unexecuted_blocks=1 00:05:52.583 00:05:52.583 ' 00:05:52.583 19:26:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.583 19:26:39 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.583 19:26:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68456 00:05:52.583 19:26:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.583 19:26:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 68456 00:05:52.583 19:26:39 -- common/autotest_common.sh@829 -- # '[' -z 68456 ']' 00:05:52.583 19:26:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.583 19:26:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.583 19:26:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
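The scheduler step launches its test app paused and waits for the RPC socket before configuring anything. Condensed from the trace (binary path, flags, trap and socket are as logged; the polling loop merely stands in for the harness's waitforlisten helper, and killprocess is the harness cleanup helper sketched further down after the scheduler teardown):

  scheduler=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # -m 0xF: four reactors, -p 0x2: main reactor on core 2 (the --main-lcore=2 in the EAL args below),
  # --wait-for-rpc: hold initialization until framework_start_init arrives
  "$scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT

  # stand-in for waitforlisten: poll until the app answers on /var/tmp/spdk.sock
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done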
00:05:52.583 19:26:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.583 19:26:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.842 [2024-12-15 19:26:39.511266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:52.842 [2024-12-15 19:26:39.511608] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68456 ] 00:05:52.842 [2024-12-15 19:26:39.651380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.842 [2024-12-15 19:26:39.725725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.842 [2024-12-15 19:26:39.725887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.842 [2024-12-15 19:26:39.725965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.842 [2024-12-15 19:26:39.725969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.777 19:26:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.777 19:26:40 -- common/autotest_common.sh@862 -- # return 0 00:05:53.777 19:26:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:53.777 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.777 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 POWER: Env isn't set yet! 00:05:53.777 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:53.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.777 POWER: Cannot set governor of lcore 0 to userspace 00:05:53.777 POWER: Attempting to initialise PSTAT power management... 00:05:53.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.777 POWER: Cannot set governor of lcore 0 to performance 00:05:53.777 POWER: Attempting to initialise CPPC power management... 00:05:53.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:53.777 POWER: Cannot set governor of lcore 0 to userspace 00:05:53.777 POWER: Attempting to initialise VM power management... 
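The framework_set_scheduler call traced above and the framework_start_init call that follows reduce to plain rpc.py invocations against the same socket; the POWER/governor warnings around them are the dynamic scheduler probing cpufreq interfaces this VM apparently does not expose, after which it continues without a governor:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # issued while --wait-for-rpc still holds init
  "$rpc" -s /var/tmp/spdk.sock framework_start_init              # then let the framework finish starting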
00:05:53.777 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:53.777 POWER: Unable to set Power Management Environment for lcore 0 00:05:53.777 [2024-12-15 19:26:40.537424] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:53.777 [2024-12-15 19:26:40.537440] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:53.777 [2024-12-15 19:26:40.537448] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:53.777 [2024-12-15 19:26:40.537461] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:53.777 [2024-12-15 19:26:40.537469] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:53.777 [2024-12-15 19:26:40.537476] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:53.777 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.777 19:26:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:53.777 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.777 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 [2024-12-15 19:26:40.659027] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:53.777 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.777 19:26:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.777 19:26:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.777 19:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.777 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 ************************************ 00:05:54.036 START TEST scheduler_create_thread 00:05:54.036 ************************************ 00:05:54.036 19:26:40 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 2 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 3 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 4 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 5 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 6 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 7 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 8 00:05:54.036 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.036 19:26:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.036 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.036 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 9 00:05:54.037 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.037 19:26:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.037 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.037 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.037 10 00:05:54.037 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.037 19:26:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.037 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.037 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.037 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.037 19:26:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.037 19:26:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.037 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.037 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.037 19:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.037 19:26:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.037 19:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.037 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:55.414 19:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.414 19:26:42 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.414 19:26:42 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.414 19:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.414 19:26:42 -- common/autotest_common.sh@10 -- # set +x 00:05:56.791 ************************************ 00:05:56.791 END TEST scheduler_create_thread 00:05:56.791 ************************************ 00:05:56.791 19:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.791 00:05:56.791 real 0m2.616s 00:05:56.791 user 0m0.016s 00:05:56.791 sys 0m0.010s 00:05:56.791 19:26:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.791 19:26:43 -- common/autotest_common.sh@10 -- # set +x 00:05:56.791 19:26:43 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.791 19:26:43 -- scheduler/scheduler.sh@46 -- # killprocess 68456 00:05:56.791 19:26:43 -- common/autotest_common.sh@936 -- # '[' -z 68456 ']' 00:05:56.791 19:26:43 -- common/autotest_common.sh@940 -- # kill -0 68456 00:05:56.791 19:26:43 -- common/autotest_common.sh@941 -- # uname 00:05:56.791 19:26:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.791 19:26:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68456 00:05:56.791 19:26:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:56.791 killing process with pid 68456 00:05:56.791 19:26:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:56.791 19:26:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68456' 00:05:56.791 19:26:43 -- common/autotest_common.sh@955 -- # kill 68456 00:05:56.791 19:26:43 -- common/autotest_common.sh@960 -- # wait 68456 00:05:57.049 [2024-12-15 19:26:43.770873] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:57.308 00:05:57.308 real 0m4.766s 00:05:57.308 user 0m9.091s 00:05:57.308 sys 0m0.419s 00:05:57.308 19:26:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.308 ************************************ 00:05:57.308 END TEST event_scheduler 00:05:57.308 ************************************ 00:05:57.308 19:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:57.308 19:26:44 -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.308 19:26:44 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.308 19:26:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.308 19:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.308 19:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:57.308 ************************************ 00:05:57.308 START TEST app_repeat 00:05:57.308 ************************************ 00:05:57.308 19:26:44 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:57.308 19:26:44 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.308 19:26:44 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.308 19:26:44 -- event/event.sh@13 -- # local nbd_list 00:05:57.308 19:26:44 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.308 19:26:44 -- event/event.sh@14 -- # local bdev_list 00:05:57.308 19:26:44 -- event/event.sh@15 -- # local repeat_times=4 00:05:57.308 19:26:44 -- event/event.sh@17 -- # modprobe nbd 00:05:57.308 Process app_repeat pid: 68579 00:05:57.308 spdk_app_start Round 0 00:05:57.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
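Both teardown paths in this log (pid 68217 for the dpdk_mem_utility app, pid 68456 for the scheduler app) go through the same killprocess helper from autotest_common.sh. Reconstructed loosely from the xtrace, it amounts to:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1            # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1           # make sure the process still exists
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the traced runs never hit this branch (reactor_0 != sudo); the real helper
      # treats a sudo wrapper specially rather than simply refusing
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }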
00:05:57.308 19:26:44 -- event/event.sh@19 -- # repeat_pid=68579 00:05:57.308 19:26:44 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.308 19:26:44 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.308 19:26:44 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68579' 00:05:57.308 19:26:44 -- event/event.sh@23 -- # for i in {0..2} 00:05:57.308 19:26:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.308 19:26:44 -- event/event.sh@25 -- # waitforlisten 68579 /var/tmp/spdk-nbd.sock 00:05:57.308 19:26:44 -- common/autotest_common.sh@829 -- # '[' -z 68579 ']' 00:05:57.308 19:26:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.308 19:26:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.308 19:26:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.308 19:26:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.308 19:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:57.308 [2024-12-15 19:26:44.129345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:57.308 [2024-12-15 19:26:44.129465] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68579 ] 00:05:57.567 [2024-12-15 19:26:44.266985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.567 [2024-12-15 19:26:44.328436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.567 [2024-12-15 19:26:44.328473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.500 19:26:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.500 19:26:45 -- common/autotest_common.sh@862 -- # return 0 00:05:58.500 19:26:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.500 Malloc0 00:05:58.791 19:26:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.791 Malloc1 00:05:58.791 19:26:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.791 19:26:45 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.791 19:26:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.049 /dev/nbd0 00:05:59.307 19:26:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.307 19:26:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.307 19:26:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.307 19:26:45 -- common/autotest_common.sh@867 -- # local i 00:05:59.307 19:26:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.307 19:26:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.307 19:26:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.307 19:26:45 -- common/autotest_common.sh@871 -- # break 00:05:59.307 19:26:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.307 19:26:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.307 19:26:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.307 1+0 records in 00:05:59.307 1+0 records out 00:05:59.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371628 s, 11.0 MB/s 00:05:59.307 19:26:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.307 19:26:45 -- common/autotest_common.sh@884 -- # size=4096 00:05:59.307 19:26:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.307 19:26:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.307 19:26:45 -- common/autotest_common.sh@887 -- # return 0 00:05:59.307 19:26:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.307 19:26:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.307 19:26:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.566 /dev/nbd1 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.566 19:26:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:59.566 19:26:46 -- common/autotest_common.sh@867 -- # local i 00:05:59.566 19:26:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.566 19:26:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.566 19:26:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:59.566 19:26:46 -- common/autotest_common.sh@871 -- # break 00:05:59.566 19:26:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.566 19:26:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.566 19:26:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.566 1+0 records in 00:05:59.566 1+0 records out 00:05:59.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346295 s, 11.8 MB/s 00:05:59.566 19:26:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.566 19:26:46 -- common/autotest_common.sh@884 -- # size=4096 00:05:59.566 19:26:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.566 19:26:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.566 19:26:46 -- common/autotest_common.sh@887 -- # return 0 00:05:59.566 
19:26:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.566 19:26:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.825 { 00:05:59.825 "bdev_name": "Malloc0", 00:05:59.825 "nbd_device": "/dev/nbd0" 00:05:59.825 }, 00:05:59.825 { 00:05:59.825 "bdev_name": "Malloc1", 00:05:59.825 "nbd_device": "/dev/nbd1" 00:05:59.825 } 00:05:59.825 ]' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.825 { 00:05:59.825 "bdev_name": "Malloc0", 00:05:59.825 "nbd_device": "/dev/nbd0" 00:05:59.825 }, 00:05:59.825 { 00:05:59.825 "bdev_name": "Malloc1", 00:05:59.825 "nbd_device": "/dev/nbd1" 00:05:59.825 } 00:05:59.825 ]' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.825 /dev/nbd1' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.825 /dev/nbd1' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.825 256+0 records in 00:05:59.825 256+0 records out 00:05:59.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790021 s, 133 MB/s 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.825 256+0 records in 00:05:59.825 256+0 records out 00:05:59.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233258 s, 45.0 MB/s 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.825 256+0 records in 00:05:59.825 256+0 records out 00:05:59.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275954 s, 38.0 MB/s 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@51 -- # local i 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.825 19:26:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.392 19:26:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@41 -- # break 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.392 19:26:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@41 -- # break 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.651 19:26:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@65 -- # true 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.909 
19:26:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.909 19:26:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.909 19:26:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.168 19:26:47 -- event/event.sh@35 -- # sleep 3 00:06:01.427 [2024-12-15 19:26:48.163519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.427 [2024-12-15 19:26:48.216887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.427 [2024-12-15 19:26:48.216893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.427 [2024-12-15 19:26:48.288447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.427 [2024-12-15 19:26:48.288518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.715 spdk_app_start Round 1 00:06:04.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.715 19:26:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:04.715 19:26:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.715 19:26:50 -- event/event.sh@25 -- # waitforlisten 68579 /var/tmp/spdk-nbd.sock 00:06:04.715 19:26:50 -- common/autotest_common.sh@829 -- # '[' -z 68579 ']' 00:06:04.715 19:26:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.715 19:26:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.715 19:26:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
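Each app_repeat round exercises the same write/verify cycle against the two exported devices; condensed from the Round 0 trace above (paths, block size and the 1M compare window are as logged):

  testdir=/home/vagrant/spdk_repo/spdk/test/event
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # write: 1 MiB of random data, copied onto each NBD device with O_DIRECT
  dd if=/dev/urandom of="$testdir/nbdrandtest" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$testdir/nbdrandtest" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify: the first 1M read back from each device must match the source file byte for byte
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$testdir/nbdrandtest" "$dev"
  done
  rm "$testdir/nbdrandtest"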
00:06:04.715 19:26:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.715 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:06:04.715 19:26:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.715 19:26:51 -- common/autotest_common.sh@862 -- # return 0 00:06:04.715 19:26:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.715 Malloc0 00:06:04.715 19:26:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.974 Malloc1 00:06:04.974 19:26:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.974 19:26:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.233 /dev/nbd0 00:06:05.233 19:26:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.233 19:26:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.233 19:26:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.233 19:26:52 -- common/autotest_common.sh@867 -- # local i 00:06:05.233 19:26:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.233 19:26:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.233 19:26:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.233 19:26:52 -- common/autotest_common.sh@871 -- # break 00:06:05.234 19:26:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.234 19:26:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.234 19:26:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.234 1+0 records in 00:06:05.234 1+0 records out 00:06:05.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172139 s, 23.8 MB/s 00:06:05.234 19:26:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.234 19:26:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:05.234 19:26:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.234 19:26:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.234 19:26:52 -- common/autotest_common.sh@887 -- # return 0 00:06:05.234 19:26:52 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.234 19:26:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.234 19:26:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.493 /dev/nbd1 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.493 19:26:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.493 19:26:52 -- common/autotest_common.sh@867 -- # local i 00:06:05.493 19:26:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.493 19:26:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.493 19:26:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.493 19:26:52 -- common/autotest_common.sh@871 -- # break 00:06:05.493 19:26:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.493 19:26:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.493 19:26:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.493 1+0 records in 00:06:05.493 1+0 records out 00:06:05.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329979 s, 12.4 MB/s 00:06:05.493 19:26:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.493 19:26:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:05.493 19:26:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.493 19:26:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.493 19:26:52 -- common/autotest_common.sh@887 -- # return 0 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.493 19:26:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.751 { 00:06:05.751 "bdev_name": "Malloc0", 00:06:05.751 "nbd_device": "/dev/nbd0" 00:06:05.751 }, 00:06:05.751 { 00:06:05.751 "bdev_name": "Malloc1", 00:06:05.751 "nbd_device": "/dev/nbd1" 00:06:05.751 } 00:06:05.751 ]' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.751 { 00:06:05.751 "bdev_name": "Malloc0", 00:06:05.751 "nbd_device": "/dev/nbd0" 00:06:05.751 }, 00:06:05.751 { 00:06:05.751 "bdev_name": "Malloc1", 00:06:05.751 "nbd_device": "/dev/nbd1" 00:06:05.751 } 00:06:05.751 ]' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.751 /dev/nbd1' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.751 /dev/nbd1' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.751 256+0 records in 00:06:05.751 256+0 records out 00:06:05.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00885011 s, 118 MB/s 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.751 19:26:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.752 256+0 records in 00:06:05.752 256+0 records out 00:06:05.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227458 s, 46.1 MB/s 00:06:05.752 19:26:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.752 19:26:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.011 256+0 records in 00:06:06.011 256+0 records out 00:06:06.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260647 s, 40.2 MB/s 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.011 19:26:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@41 -- # break 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.270 19:26:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.528 19:26:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.528 19:26:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.528 19:26:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.528 19:26:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@41 -- # break 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.529 19:26:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@65 -- # true 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.787 19:26:53 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.787 19:26:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.046 19:26:53 -- event/event.sh@35 -- # sleep 3 00:06:07.305 [2024-12-15 19:26:54.179666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.564 [2024-12-15 19:26:54.232020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.564 [2024-12-15 19:26:54.232038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.564 [2024-12-15 19:26:54.303440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.564 [2024-12-15 19:26:54.303519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.096 spdk_app_start Round 2 00:06:10.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
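Round 1 ends exactly as Round 0 did: both NBD exports are stopped, nbd_get_disks comes back empty, and the app is killed before the next round starts. Put together, one round's setup and teardown over the RPC socket looks roughly like this (every RPC name and the socket path appear verbatim in the trace; only the ordering commentary is added):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # setup: two 64 MB malloc bdevs with a 4096-byte block size, each exported over NBD
  $rpc bdev_malloc_create 64 4096        # -> Malloc0
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  $rpc nbd_get_disks                     # JSON list with both bdev_name/nbd_device pairs

  # ... write/verify as sketched after Round 0 ...

  # teardown: stop both exports and terminate the app so the next round starts clean
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks                     # now returns []
  $rpc spdk_kill_instance SIGTERM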
00:06:10.096 19:26:56 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.096 19:26:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.096 19:26:56 -- event/event.sh@25 -- # waitforlisten 68579 /var/tmp/spdk-nbd.sock 00:06:10.096 19:26:56 -- common/autotest_common.sh@829 -- # '[' -z 68579 ']' 00:06:10.096 19:26:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.096 19:26:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.096 19:26:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.096 19:26:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.096 19:26:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.355 19:26:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.355 19:26:57 -- common/autotest_common.sh@862 -- # return 0 00:06:10.355 19:26:57 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.613 Malloc0 00:06:10.614 19:26:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.872 Malloc1 00:06:10.872 19:26:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.872 19:26:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.131 /dev/nbd0 00:06:11.131 19:26:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.131 19:26:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.131 19:26:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.131 19:26:57 -- common/autotest_common.sh@867 -- # local i 00:06:11.131 19:26:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.131 19:26:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.131 19:26:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.131 19:26:57 -- common/autotest_common.sh@871 -- # break 00:06:11.131 19:26:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.131 19:26:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.131 19:26:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.131 1+0 records in 00:06:11.131 1+0 records out 00:06:11.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00164268 s, 2.5 MB/s 00:06:11.131 19:26:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.131 19:26:57 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.131 19:26:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.131 19:26:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.131 19:26:57 -- common/autotest_common.sh@887 -- # return 0 00:06:11.131 19:26:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.131 19:26:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.131 19:26:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.400 /dev/nbd1 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.400 19:26:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.400 19:26:58 -- common/autotest_common.sh@867 -- # local i 00:06:11.400 19:26:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.400 19:26:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.400 19:26:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.400 19:26:58 -- common/autotest_common.sh@871 -- # break 00:06:11.400 19:26:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.400 19:26:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.400 19:26:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.400 1+0 records in 00:06:11.400 1+0 records out 00:06:11.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373474 s, 11.0 MB/s 00:06:11.400 19:26:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.400 19:26:58 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.400 19:26:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.400 19:26:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.400 19:26:58 -- common/autotest_common.sh@887 -- # return 0 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.400 19:26:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.680 { 00:06:11.680 "bdev_name": "Malloc0", 00:06:11.680 "nbd_device": "/dev/nbd0" 00:06:11.680 }, 00:06:11.680 { 00:06:11.680 "bdev_name": "Malloc1", 00:06:11.680 "nbd_device": "/dev/nbd1" 00:06:11.680 } 00:06:11.680 ]' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.680 { 00:06:11.680 "bdev_name": "Malloc0", 00:06:11.680 "nbd_device": "/dev/nbd0" 00:06:11.680 }, 00:06:11.680 { 00:06:11.680 "bdev_name": "Malloc1", 00:06:11.680 "nbd_device": "/dev/nbd1" 00:06:11.680 } 00:06:11.680 ]' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:06:11.680 /dev/nbd1' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.680 /dev/nbd1' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.680 256+0 records in 00:06:11.680 256+0 records out 00:06:11.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00924777 s, 113 MB/s 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.680 19:26:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.939 256+0 records in 00:06:11.939 256+0 records out 00:06:11.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252516 s, 41.5 MB/s 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.939 256+0 records in 00:06:11.939 256+0 records out 00:06:11.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299525 s, 35.0 MB/s 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.939 
19:26:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.939 19:26:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@41 -- # break 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.198 19:26:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@41 -- # break 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.457 19:26:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@65 -- # true 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.716 19:26:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.716 19:26:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.974 19:26:59 -- event/event.sh@35 -- # sleep 3 00:06:13.232 [2024-12-15 19:27:00.034230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.232 [2024-12-15 19:27:00.123614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.232 [2024-12-15 19:27:00.123631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.491 [2024-12-15 19:27:00.195716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.492 [2024-12-15 19:27:00.195844] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
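nbd_stop_disk only issues the RPC; the waitfornbd_exit calls traced above are what make the teardown synchronous, polling /proc/partitions until the device entry disappears. A minimal sketch of that loop, reconstructed from the trace (the 20-try counter and the grep are visible above; the sleep between tries is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # done as soon as the device drops out of the partition table
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0
            fi
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        return 1
    }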
00:06:16.023 19:27:02 -- event/event.sh@38 -- # waitforlisten 68579 /var/tmp/spdk-nbd.sock 00:06:16.023 19:27:02 -- common/autotest_common.sh@829 -- # '[' -z 68579 ']' 00:06:16.023 19:27:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.023 19:27:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.023 19:27:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.023 19:27:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.023 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:06:16.283 19:27:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.283 19:27:03 -- common/autotest_common.sh@862 -- # return 0 00:06:16.283 19:27:03 -- event/event.sh@39 -- # killprocess 68579 00:06:16.283 19:27:03 -- common/autotest_common.sh@936 -- # '[' -z 68579 ']' 00:06:16.283 19:27:03 -- common/autotest_common.sh@940 -- # kill -0 68579 00:06:16.283 19:27:03 -- common/autotest_common.sh@941 -- # uname 00:06:16.283 19:27:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.283 19:27:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68579 00:06:16.283 killing process with pid 68579 00:06:16.283 19:27:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.283 19:27:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.283 19:27:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68579' 00:06:16.283 19:27:03 -- common/autotest_common.sh@955 -- # kill 68579 00:06:16.283 19:27:03 -- common/autotest_common.sh@960 -- # wait 68579 00:06:16.541 spdk_app_start is called in Round 0. 00:06:16.541 Shutdown signal received, stop current app iteration 00:06:16.541 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:16.541 spdk_app_start is called in Round 1. 00:06:16.541 Shutdown signal received, stop current app iteration 00:06:16.541 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:16.541 spdk_app_start is called in Round 2. 00:06:16.541 Shutdown signal received, stop current app iteration 00:06:16.541 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:16.541 spdk_app_start is called in Round 3. 
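killprocess, used above to tear down the app_repeat target (pid 68579), follows the sequence visible in the trace: confirm the pid is alive, read its command name, refuse to signal a sudo wrapper, then kill and reap it. A simplified sketch (the real helper in test/common/autotest_common.sh handles more corner cases):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0            # nothing to do if already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1   # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                   # reap it if it is our child
    }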
00:06:16.541 Shutdown signal received, stop current app iteration 00:06:16.541 19:27:03 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:16.541 19:27:03 -- event/event.sh@42 -- # return 0 00:06:16.541 00:06:16.541 real 0m19.250s 00:06:16.541 user 0m43.147s 00:06:16.541 sys 0m3.079s 00:06:16.541 19:27:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.542 ************************************ 00:06:16.542 END TEST app_repeat 00:06:16.542 ************************************ 00:06:16.542 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.542 19:27:03 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:16.542 19:27:03 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:16.542 19:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.542 19:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.542 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.542 ************************************ 00:06:16.542 START TEST cpu_locks 00:06:16.542 ************************************ 00:06:16.542 19:27:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:16.801 * Looking for test storage... 00:06:16.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.801 19:27:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:16.801 19:27:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:16.801 19:27:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:16.801 19:27:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:16.801 19:27:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:16.801 19:27:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:16.801 19:27:03 -- scripts/common.sh@335 -- # IFS=.-: 00:06:16.801 19:27:03 -- scripts/common.sh@335 -- # read -ra ver1 00:06:16.801 19:27:03 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.801 19:27:03 -- scripts/common.sh@336 -- # read -ra ver2 00:06:16.801 19:27:03 -- scripts/common.sh@337 -- # local 'op=<' 00:06:16.801 19:27:03 -- scripts/common.sh@339 -- # ver1_l=2 00:06:16.801 19:27:03 -- scripts/common.sh@340 -- # ver2_l=1 00:06:16.801 19:27:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.801 19:27:03 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.801 19:27:03 -- scripts/common.sh@344 -- # : 1 00:06:16.801 19:27:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.801 19:27:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.801 19:27:03 -- scripts/common.sh@364 -- # decimal 1 00:06:16.801 19:27:03 -- scripts/common.sh@352 -- # local d=1 00:06:16.801 19:27:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.801 19:27:03 -- scripts/common.sh@354 -- # echo 1 00:06:16.801 19:27:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.801 19:27:03 -- scripts/common.sh@365 -- # decimal 2 00:06:16.801 19:27:03 -- scripts/common.sh@352 -- # local d=2 00:06:16.801 19:27:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.801 19:27:03 -- scripts/common.sh@354 -- # echo 2 00:06:16.801 19:27:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.801 19:27:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.801 19:27:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.801 19:27:03 -- scripts/common.sh@367 -- # return 0 00:06:16.801 19:27:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.801 --rc genhtml_branch_coverage=1 00:06:16.801 --rc genhtml_function_coverage=1 00:06:16.801 --rc genhtml_legend=1 00:06:16.801 --rc geninfo_all_blocks=1 00:06:16.801 --rc geninfo_unexecuted_blocks=1 00:06:16.801 00:06:16.801 ' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.801 --rc genhtml_branch_coverage=1 00:06:16.801 --rc genhtml_function_coverage=1 00:06:16.801 --rc genhtml_legend=1 00:06:16.801 --rc geninfo_all_blocks=1 00:06:16.801 --rc geninfo_unexecuted_blocks=1 00:06:16.801 00:06:16.801 ' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.801 --rc genhtml_branch_coverage=1 00:06:16.801 --rc genhtml_function_coverage=1 00:06:16.801 --rc genhtml_legend=1 00:06:16.801 --rc geninfo_all_blocks=1 00:06:16.801 --rc geninfo_unexecuted_blocks=1 00:06:16.801 00:06:16.801 ' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.801 --rc genhtml_branch_coverage=1 00:06:16.801 --rc genhtml_function_coverage=1 00:06:16.801 --rc genhtml_legend=1 00:06:16.801 --rc geninfo_all_blocks=1 00:06:16.801 --rc geninfo_unexecuted_blocks=1 00:06:16.801 00:06:16.801 ' 00:06:16.801 19:27:03 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:16.801 19:27:03 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:16.801 19:27:03 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:16.801 19:27:03 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:16.801 19:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.801 19:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.801 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.801 ************************************ 00:06:16.801 START TEST default_locks 00:06:16.801 ************************************ 00:06:16.801 19:27:03 -- common/autotest_common.sh@1114 -- # default_locks 00:06:16.801 19:27:03 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69216 00:06:16.801 19:27:03 -- event/cpu_locks.sh@47 -- # waitforlisten 69216 00:06:16.801 19:27:03 -- common/autotest_common.sh@829 -- # '[' -z 69216 ']' 00:06:16.801 19:27:03 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.801 19:27:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.801 19:27:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.801 19:27:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.801 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.801 19:27:03 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.801 [2024-12-15 19:27:03.652787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:16.801 [2024-12-15 19:27:03.653498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69216 ] 00:06:17.063 [2024-12-15 19:27:03.804564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.063 [2024-12-15 19:27:03.892000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.063 [2024-12-15 19:27:03.892226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.000 19:27:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.000 19:27:04 -- common/autotest_common.sh@862 -- # return 0 00:06:18.000 19:27:04 -- event/cpu_locks.sh@49 -- # locks_exist 69216 00:06:18.000 19:27:04 -- event/cpu_locks.sh@22 -- # lslocks -p 69216 00:06:18.000 19:27:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.258 19:27:04 -- event/cpu_locks.sh@50 -- # killprocess 69216 00:06:18.258 19:27:04 -- common/autotest_common.sh@936 -- # '[' -z 69216 ']' 00:06:18.258 19:27:04 -- common/autotest_common.sh@940 -- # kill -0 69216 00:06:18.258 19:27:04 -- common/autotest_common.sh@941 -- # uname 00:06:18.258 19:27:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.258 19:27:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69216 00:06:18.258 19:27:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.258 killing process with pid 69216 00:06:18.258 19:27:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.258 19:27:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69216' 00:06:18.258 19:27:04 -- common/autotest_common.sh@955 -- # kill 69216 00:06:18.258 19:27:04 -- common/autotest_common.sh@960 -- # wait 69216 00:06:18.824 19:27:05 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69216 00:06:18.824 19:27:05 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.824 19:27:05 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69216 00:06:18.824 19:27:05 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.824 19:27:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.824 19:27:05 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.824 19:27:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.824 19:27:05 -- common/autotest_common.sh@653 -- # waitforlisten 69216 00:06:18.824 19:27:05 -- common/autotest_common.sh@829 -- # '[' -z 69216 ']' 00:06:18.824 19:27:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.824 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.824 19:27:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.824 19:27:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.824 19:27:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.824 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.824 ERROR: process (pid: 69216) is no longer running 00:06:18.824 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69216) - No such process 00:06:18.824 19:27:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.824 19:27:05 -- common/autotest_common.sh@862 -- # return 1 00:06:18.824 19:27:05 -- common/autotest_common.sh@653 -- # es=1 00:06:18.824 19:27:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.824 19:27:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.824 19:27:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.824 19:27:05 -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.824 19:27:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.824 19:27:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.824 19:27:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.824 00:06:18.824 real 0m1.929s 00:06:18.824 user 0m1.973s 00:06:18.824 sys 0m0.602s 00:06:18.824 19:27:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.824 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.824 ************************************ 00:06:18.824 END TEST default_locks 00:06:18.824 ************************************ 00:06:18.824 19:27:05 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.824 19:27:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.824 19:27:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.824 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.824 ************************************ 00:06:18.824 START TEST default_locks_via_rpc 00:06:18.824 ************************************ 00:06:18.824 19:27:05 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:18.824 19:27:05 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69279 00:06:18.824 19:27:05 -- event/cpu_locks.sh@63 -- # waitforlisten 69279 00:06:18.825 19:27:05 -- common/autotest_common.sh@829 -- # '[' -z 69279 ']' 00:06:18.825 19:27:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.825 19:27:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.825 19:27:05 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.825 19:27:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.825 19:27:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.825 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.825 [2024-12-15 19:27:05.625784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
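The default_locks case above relies on the NOT helper: run a command that is expected to fail, capture its exit status, and succeed only if it did fail. A rough sketch matching the es bookkeeping in the trace (the valid_exec_arg type check and the es > 128 signal case are simplified here):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1    # died from a signal: treat as a real error
        (( es != 0 ))                 # succeed only when the command failed
    }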
00:06:18.825 [2024-12-15 19:27:05.625919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69279 ] 00:06:19.083 [2024-12-15 19:27:05.760203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.083 [2024-12-15 19:27:05.849506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.083 [2024-12-15 19:27:05.849722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.651 19:27:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.651 19:27:06 -- common/autotest_common.sh@862 -- # return 0 00:06:19.651 19:27:06 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.651 19:27:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.651 19:27:06 -- common/autotest_common.sh@10 -- # set +x 00:06:19.651 19:27:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.652 19:27:06 -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.652 19:27:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.652 19:27:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.652 19:27:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.652 19:27:06 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.652 19:27:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.652 19:27:06 -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 19:27:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.652 19:27:06 -- event/cpu_locks.sh@71 -- # locks_exist 69279 00:06:19.652 19:27:06 -- event/cpu_locks.sh@22 -- # lslocks -p 69279 00:06:19.652 19:27:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.218 19:27:06 -- event/cpu_locks.sh@73 -- # killprocess 69279 00:06:20.218 19:27:06 -- common/autotest_common.sh@936 -- # '[' -z 69279 ']' 00:06:20.218 19:27:06 -- common/autotest_common.sh@940 -- # kill -0 69279 00:06:20.218 19:27:06 -- common/autotest_common.sh@941 -- # uname 00:06:20.218 19:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.218 19:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69279 00:06:20.218 19:27:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.218 19:27:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.218 killing process with pid 69279 00:06:20.218 19:27:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69279' 00:06:20.218 19:27:07 -- common/autotest_common.sh@955 -- # kill 69279 00:06:20.218 19:27:07 -- common/autotest_common.sh@960 -- # wait 69279 00:06:20.784 00:06:20.784 real 0m1.916s 00:06:20.784 user 0m1.895s 00:06:20.784 sys 0m0.651s 00:06:20.784 19:27:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.784 19:27:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.784 ************************************ 00:06:20.784 END TEST default_locks_via_rpc 00:06:20.784 ************************************ 00:06:20.784 19:27:07 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.784 19:27:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.784 19:27:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.784 19:27:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.784 
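The via_rpc variant above flips the feature at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs and then checks the result. The check itself is just an lslocks query for the spdk_cpu_lock files, as the trace shows; a sketch of both helpers (the nullglob handling in no_locks is an assumption, the trace only shows the array coming back empty):

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    no_locks() {
        shopt -s nullglob
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))
    }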
************************************ 00:06:20.784 START TEST non_locking_app_on_locked_coremask 00:06:20.784 ************************************ 00:06:20.784 19:27:07 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:20.784 19:27:07 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69344 00:06:20.784 19:27:07 -- event/cpu_locks.sh@81 -- # waitforlisten 69344 /var/tmp/spdk.sock 00:06:20.784 19:27:07 -- common/autotest_common.sh@829 -- # '[' -z 69344 ']' 00:06:20.784 19:27:07 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.784 19:27:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.784 19:27:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.784 19:27:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.784 19:27:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.784 19:27:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.784 [2024-12-15 19:27:07.608365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.784 [2024-12-15 19:27:07.608466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69344 ] 00:06:21.042 [2024-12-15 19:27:07.745893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.042 [2024-12-15 19:27:07.803084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.042 [2024-12-15 19:27:07.803263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.977 19:27:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.977 19:27:08 -- common/autotest_common.sh@862 -- # return 0 00:06:21.977 19:27:08 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69373 00:06:21.977 19:27:08 -- event/cpu_locks.sh@85 -- # waitforlisten 69373 /var/tmp/spdk2.sock 00:06:21.977 19:27:08 -- common/autotest_common.sh@829 -- # '[' -z 69373 ']' 00:06:21.977 19:27:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.977 19:27:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.977 19:27:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.977 19:27:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.977 19:27:08 -- common/autotest_common.sh@10 -- # set +x 00:06:21.977 19:27:08 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.977 [2024-12-15 19:27:08.650599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.977 [2024-12-15 19:27:08.651449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69373 ] 00:06:21.977 [2024-12-15 19:27:08.788173] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
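This case runs two targets on the same core mask; only the second one opts out of the core lock, which is why both can come up. Condensed from the launches in the trace (assumes the autotest_common.sh helpers such as waitforlisten are sourced):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock  # shares core 0 because it skips the lock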
00:06:21.977 [2024-12-15 19:27:08.788236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.235 [2024-12-15 19:27:08.988208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.235 [2024-12-15 19:27:08.988356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.802 19:27:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.802 19:27:09 -- common/autotest_common.sh@862 -- # return 0 00:06:22.802 19:27:09 -- event/cpu_locks.sh@87 -- # locks_exist 69344 00:06:22.802 19:27:09 -- event/cpu_locks.sh@22 -- # lslocks -p 69344 00:06:22.802 19:27:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.368 19:27:10 -- event/cpu_locks.sh@89 -- # killprocess 69344 00:06:23.368 19:27:10 -- common/autotest_common.sh@936 -- # '[' -z 69344 ']' 00:06:23.368 19:27:10 -- common/autotest_common.sh@940 -- # kill -0 69344 00:06:23.368 19:27:10 -- common/autotest_common.sh@941 -- # uname 00:06:23.368 19:27:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.368 19:27:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69344 00:06:23.368 19:27:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.368 killing process with pid 69344 00:06:23.368 19:27:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.368 19:27:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69344' 00:06:23.368 19:27:10 -- common/autotest_common.sh@955 -- # kill 69344 00:06:23.368 19:27:10 -- common/autotest_common.sh@960 -- # wait 69344 00:06:24.332 19:27:11 -- event/cpu_locks.sh@90 -- # killprocess 69373 00:06:24.332 19:27:11 -- common/autotest_common.sh@936 -- # '[' -z 69373 ']' 00:06:24.332 19:27:11 -- common/autotest_common.sh@940 -- # kill -0 69373 00:06:24.332 19:27:11 -- common/autotest_common.sh@941 -- # uname 00:06:24.332 19:27:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.332 19:27:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69373 00:06:24.332 killing process with pid 69373 00:06:24.332 19:27:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.332 19:27:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.332 19:27:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69373' 00:06:24.332 19:27:11 -- common/autotest_common.sh@955 -- # kill 69373 00:06:24.332 19:27:11 -- common/autotest_common.sh@960 -- # wait 69373 00:06:24.898 00:06:24.898 real 0m4.143s 00:06:24.898 user 0m4.427s 00:06:24.898 sys 0m1.153s 00:06:24.898 19:27:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.898 19:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:24.898 ************************************ 00:06:24.898 END TEST non_locking_app_on_locked_coremask 00:06:24.898 ************************************ 00:06:24.898 19:27:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.898 19:27:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.898 19:27:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.898 19:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:24.898 ************************************ 00:06:24.898 START TEST locking_app_on_unlocked_coremask 00:06:24.898 ************************************ 00:06:24.899 19:27:11 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:24.899 19:27:11 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69452 00:06:24.899 19:27:11 -- event/cpu_locks.sh@99 -- # waitforlisten 69452 /var/tmp/spdk.sock 00:06:24.899 19:27:11 -- common/autotest_common.sh@829 -- # '[' -z 69452 ']' 00:06:24.899 19:27:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.899 19:27:11 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.899 19:27:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.899 19:27:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.899 19:27:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.899 19:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.157 [2024-12-15 19:27:11.797089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.157 [2024-12-15 19:27:11.797168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69452 ] 00:06:25.157 [2024-12-15 19:27:11.927102] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.157 [2024-12-15 19:27:11.927169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.157 [2024-12-15 19:27:12.038994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.157 [2024-12-15 19:27:12.039148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.093 19:27:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.093 19:27:12 -- common/autotest_common.sh@862 -- # return 0 00:06:26.093 19:27:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69480 00:06:26.093 19:27:12 -- event/cpu_locks.sh@103 -- # waitforlisten 69480 /var/tmp/spdk2.sock 00:06:26.093 19:27:12 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.093 19:27:12 -- common/autotest_common.sh@829 -- # '[' -z 69480 ']' 00:06:26.093 19:27:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.093 19:27:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.093 19:27:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.093 19:27:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.093 19:27:12 -- common/autotest_common.sh@10 -- # set +x 00:06:26.093 [2024-12-15 19:27:12.844219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:26.093 [2024-12-15 19:27:12.844298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69480 ] 00:06:26.093 [2024-12-15 19:27:12.976187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.352 [2024-12-15 19:27:13.234941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.352 [2024-12-15 19:27:13.235094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.287 19:27:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.287 19:27:13 -- common/autotest_common.sh@862 -- # return 0 00:06:27.287 19:27:13 -- event/cpu_locks.sh@105 -- # locks_exist 69480 00:06:27.287 19:27:13 -- event/cpu_locks.sh@22 -- # lslocks -p 69480 00:06:27.287 19:27:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.546 19:27:14 -- event/cpu_locks.sh@107 -- # killprocess 69452 00:06:27.546 19:27:14 -- common/autotest_common.sh@936 -- # '[' -z 69452 ']' 00:06:27.546 19:27:14 -- common/autotest_common.sh@940 -- # kill -0 69452 00:06:27.546 19:27:14 -- common/autotest_common.sh@941 -- # uname 00:06:27.546 19:27:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.546 19:27:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69452 00:06:27.546 killing process with pid 69452 00:06:27.546 19:27:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.546 19:27:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.546 19:27:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69452' 00:06:27.546 19:27:14 -- common/autotest_common.sh@955 -- # kill 69452 00:06:27.546 19:27:14 -- common/autotest_common.sh@960 -- # wait 69452 00:06:28.480 19:27:15 -- event/cpu_locks.sh@108 -- # killprocess 69480 00:06:28.480 19:27:15 -- common/autotest_common.sh@936 -- # '[' -z 69480 ']' 00:06:28.480 19:27:15 -- common/autotest_common.sh@940 -- # kill -0 69480 00:06:28.480 19:27:15 -- common/autotest_common.sh@941 -- # uname 00:06:28.480 19:27:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.480 19:27:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69480 00:06:28.480 killing process with pid 69480 00:06:28.480 19:27:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.480 19:27:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.480 19:27:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69480' 00:06:28.480 19:27:15 -- common/autotest_common.sh@955 -- # kill 69480 00:06:28.480 19:27:15 -- common/autotest_common.sh@960 -- # wait 69480 00:06:29.045 ************************************ 00:06:29.045 END TEST locking_app_on_unlocked_coremask 00:06:29.045 ************************************ 00:06:29.045 00:06:29.045 real 0m4.086s 00:06:29.045 user 0m4.391s 00:06:29.045 sys 0m1.123s 00:06:29.045 19:27:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.045 19:27:15 -- common/autotest_common.sh@10 -- # set +x 00:06:29.045 19:27:15 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:29.045 19:27:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.045 19:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.045 19:27:15 -- common/autotest_common.sh@10 -- # set +x 
00:06:29.045 ************************************ 00:06:29.045 START TEST locking_app_on_locked_coremask 00:06:29.045 ************************************ 00:06:29.045 19:27:15 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:29.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.045 19:27:15 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69559 00:06:29.045 19:27:15 -- event/cpu_locks.sh@116 -- # waitforlisten 69559 /var/tmp/spdk.sock 00:06:29.045 19:27:15 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.045 19:27:15 -- common/autotest_common.sh@829 -- # '[' -z 69559 ']' 00:06:29.045 19:27:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.045 19:27:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.045 19:27:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.045 19:27:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.045 19:27:15 -- common/autotest_common.sh@10 -- # set +x 00:06:29.304 [2024-12-15 19:27:15.948803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:29.304 [2024-12-15 19:27:15.948924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69559 ] 00:06:29.304 [2024-12-15 19:27:16.086612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.304 [2024-12-15 19:27:16.171752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.304 [2024-12-15 19:27:16.171946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.239 19:27:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.239 19:27:16 -- common/autotest_common.sh@862 -- # return 0 00:06:30.239 19:27:16 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69587 00:06:30.239 19:27:16 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69587 /var/tmp/spdk2.sock 00:06:30.239 19:27:16 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.239 19:27:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.239 19:27:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69587 /var/tmp/spdk2.sock 00:06:30.239 19:27:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.239 19:27:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.239 19:27:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.239 19:27:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.239 19:27:16 -- common/autotest_common.sh@653 -- # waitforlisten 69587 /var/tmp/spdk2.sock 00:06:30.239 19:27:16 -- common/autotest_common.sh@829 -- # '[' -z 69587 ']' 00:06:30.239 19:27:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.239 19:27:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.239 19:27:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.239 19:27:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.239 19:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:30.239 [2024-12-15 19:27:16.988372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.239 [2024-12-15 19:27:16.988853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:06:30.239 [2024-12-15 19:27:17.124080] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69559 has claimed it. 00:06:30.239 [2024-12-15 19:27:17.124149] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.174 ERROR: process (pid: 69587) is no longer running 00:06:31.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69587) - No such process 00:06:31.174 19:27:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.174 19:27:17 -- common/autotest_common.sh@862 -- # return 1 00:06:31.174 19:27:17 -- common/autotest_common.sh@653 -- # es=1 00:06:31.174 19:27:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.174 19:27:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.174 19:27:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.174 19:27:17 -- event/cpu_locks.sh@122 -- # locks_exist 69559 00:06:31.174 19:27:17 -- event/cpu_locks.sh@22 -- # lslocks -p 69559 00:06:31.174 19:27:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.433 19:27:18 -- event/cpu_locks.sh@124 -- # killprocess 69559 00:06:31.433 19:27:18 -- common/autotest_common.sh@936 -- # '[' -z 69559 ']' 00:06:31.433 19:27:18 -- common/autotest_common.sh@940 -- # kill -0 69559 00:06:31.433 19:27:18 -- common/autotest_common.sh@941 -- # uname 00:06:31.433 19:27:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.433 19:27:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69559 00:06:31.433 killing process with pid 69559 00:06:31.433 19:27:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.433 19:27:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.433 19:27:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69559' 00:06:31.433 19:27:18 -- common/autotest_common.sh@955 -- # kill 69559 00:06:31.433 19:27:18 -- common/autotest_common.sh@960 -- # wait 69559 00:06:31.997 00:06:31.997 real 0m2.822s 00:06:31.997 user 0m3.174s 00:06:31.997 sys 0m0.732s 00:06:31.997 19:27:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.997 19:27:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 ************************************ 00:06:31.997 END TEST locking_app_on_locked_coremask 00:06:31.997 ************************************ 00:06:31.997 19:27:18 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.997 19:27:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.997 19:27:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.997 19:27:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 ************************************ 00:06:31.997 START TEST locking_overlapped_coremask 00:06:31.997 ************************************ 00:06:31.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
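The case that just finished is the inverse of the earlier non-locking one: the second target keeps the core lock enabled, so spdk_app_start reports that core 0 is already claimed and exits, and the NOT waitforlisten assertion passes. Condensed, under the same sourced-helpers assumption as before:

    "$SPDK_BIN" -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock       # holds the core 0 lock

    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &    # locks left enabled this time
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock  # must fail: core 0 is claimed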
00:06:31.997 19:27:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:31.997 19:27:18 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69644 00:06:31.997 19:27:18 -- event/cpu_locks.sh@133 -- # waitforlisten 69644 /var/tmp/spdk.sock 00:06:31.997 19:27:18 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.997 19:27:18 -- common/autotest_common.sh@829 -- # '[' -z 69644 ']' 00:06:31.997 19:27:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.997 19:27:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.997 19:27:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.997 19:27:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.997 19:27:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 [2024-12-15 19:27:18.830320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.998 [2024-12-15 19:27:18.830623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69644 ] 00:06:32.255 [2024-12-15 19:27:18.967283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.255 [2024-12-15 19:27:19.035041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.255 [2024-12-15 19:27:19.035666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.255 [2024-12-15 19:27:19.035849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.255 [2024-12-15 19:27:19.035854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.190 19:27:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.190 19:27:19 -- common/autotest_common.sh@862 -- # return 0 00:06:33.190 19:27:19 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69674 00:06:33.190 19:27:19 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69674 /var/tmp/spdk2.sock 00:06:33.190 19:27:19 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:33.190 19:27:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:33.190 19:27:19 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69674 /var/tmp/spdk2.sock 00:06:33.190 19:27:19 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:33.190 19:27:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.190 19:27:19 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:33.190 19:27:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.190 19:27:19 -- common/autotest_common.sh@653 -- # waitforlisten 69674 /var/tmp/spdk2.sock 00:06:33.190 19:27:19 -- common/autotest_common.sh@829 -- # '[' -z 69674 ']' 00:06:33.190 19:27:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.190 19:27:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.190 19:27:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
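The two masks used here were picked so that they overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target will shortly fail to claim core 2. The intersection can be checked directly:

    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2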
00:06:33.190 19:27:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.190 19:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:33.190 [2024-12-15 19:27:19.887495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:33.190 [2024-12-15 19:27:19.888163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69674 ] 00:06:33.190 [2024-12-15 19:27:20.024352] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69644 has claimed it. 00:06:33.190 [2024-12-15 19:27:20.024461] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.758 ERROR: process (pid: 69674) is no longer running 00:06:33.758 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69674) - No such process 00:06:33.758 19:27:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.758 19:27:20 -- common/autotest_common.sh@862 -- # return 1 00:06:33.758 19:27:20 -- common/autotest_common.sh@653 -- # es=1 00:06:33.758 19:27:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.758 19:27:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.758 19:27:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.758 19:27:20 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.758 19:27:20 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.758 19:27:20 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.758 19:27:20 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.758 19:27:20 -- event/cpu_locks.sh@141 -- # killprocess 69644 00:06:33.758 19:27:20 -- common/autotest_common.sh@936 -- # '[' -z 69644 ']' 00:06:33.758 19:27:20 -- common/autotest_common.sh@940 -- # kill -0 69644 00:06:33.758 19:27:20 -- common/autotest_common.sh@941 -- # uname 00:06:33.758 19:27:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.758 19:27:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69644 00:06:33.758 killing process with pid 69644 00:06:33.758 19:27:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.758 19:27:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.758 19:27:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69644' 00:06:33.758 19:27:20 -- common/autotest_common.sh@955 -- # kill 69644 00:06:33.758 19:27:20 -- common/autotest_common.sh@960 -- # wait 69644 00:06:34.325 ************************************ 00:06:34.325 END TEST locking_overlapped_coremask 00:06:34.325 ************************************ 00:06:34.325 00:06:34.325 real 0m2.373s 00:06:34.325 user 0m6.627s 00:06:34.325 sys 0m0.518s 00:06:34.325 19:27:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.325 19:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 19:27:21 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:34.325 19:27:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.325 19:27:21 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.325 19:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 ************************************ 00:06:34.325 START TEST locking_overlapped_coremask_via_rpc 00:06:34.325 ************************************ 00:06:34.325 19:27:21 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:34.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.325 19:27:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69721 00:06:34.325 19:27:21 -- event/cpu_locks.sh@149 -- # waitforlisten 69721 /var/tmp/spdk.sock 00:06:34.325 19:27:21 -- common/autotest_common.sh@829 -- # '[' -z 69721 ']' 00:06:34.325 19:27:21 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:34.325 19:27:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.325 19:27:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.325 19:27:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.325 19:27:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.325 19:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.584 [2024-12-15 19:27:21.260211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.584 [2024-12-15 19:27:21.260736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69721 ] 00:06:34.584 [2024-12-15 19:27:21.398933] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:34.584 [2024-12-15 19:27:21.398994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.584 [2024-12-15 19:27:21.471367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.584 [2024-12-15 19:27:21.471971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.584 [2024-12-15 19:27:21.472142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.584 [2024-12-15 19:27:21.472148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.520 19:27:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.521 19:27:22 -- common/autotest_common.sh@862 -- # return 0 00:06:35.521 19:27:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69754 00:06:35.521 19:27:22 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:35.521 19:27:22 -- event/cpu_locks.sh@153 -- # waitforlisten 69754 /var/tmp/spdk2.sock 00:06:35.521 19:27:22 -- common/autotest_common.sh@829 -- # '[' -z 69754 ']' 00:06:35.521 19:27:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.521 19:27:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.521 19:27:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:35.521 19:27:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.521 19:27:22 -- common/autotest_common.sh@10 -- # set +x 00:06:35.521 [2024-12-15 19:27:22.321291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.521 [2024-12-15 19:27:22.321409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69754 ] 00:06:35.779 [2024-12-15 19:27:22.460312] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.779 [2024-12-15 19:27:22.460402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.779 [2024-12-15 19:27:22.673977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.038 [2024-12-15 19:27:22.674306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.038 [2024-12-15 19:27:22.677949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.038 [2024-12-15 19:27:22.677949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.441 19:27:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.441 19:27:23 -- common/autotest_common.sh@862 -- # return 0 00:06:37.441 19:27:23 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.441 19:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.441 19:27:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.441 19:27:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.441 19:27:24 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.441 19:27:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.441 19:27:24 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.441 19:27:24 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:37.441 19:27:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.441 19:27:24 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:37.441 19:27:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.441 19:27:24 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.441 19:27:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.441 19:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.441 [2024-12-15 19:27:24.007993] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69721 has claimed it. 00:06:37.441 2024/12/15 19:27:24 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:37.441 request: 00:06:37.441 { 00:06:37.441 "method": "framework_enable_cpumask_locks", 00:06:37.441 "params": {} 00:06:37.441 } 00:06:37.441 Got JSON-RPC error response 00:06:37.441 GoRPCClient: error on JSON-RPC call 00:06:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
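Note: the failure above is an ordinary JSON-RPC error: the second target (on /var/tmp/spdk2.sock) cannot re-enable core mask locks while process 69721 still holds the lock file for the overlapping core. A sketch of issuing the same request by hand, assuming SPDK's scripts/rpc.py helper (which the test's rpc_cmd wrapper drives) is available:

    # Fails with -32603 while the first target still owns core 2's lock file;
    # succeeds once the overlapping cores are released.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks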
00:06:37.441 19:27:24 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:37.441 19:27:24 -- common/autotest_common.sh@653 -- # es=1 00:06:37.441 19:27:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.441 19:27:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.441 19:27:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.441 19:27:24 -- event/cpu_locks.sh@158 -- # waitforlisten 69721 /var/tmp/spdk.sock 00:06:37.441 19:27:24 -- common/autotest_common.sh@829 -- # '[' -z 69721 ']' 00:06:37.441 19:27:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.441 19:27:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.441 19:27:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.441 19:27:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.441 19:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.441 19:27:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.441 19:27:24 -- common/autotest_common.sh@862 -- # return 0 00:06:37.441 19:27:24 -- event/cpu_locks.sh@159 -- # waitforlisten 69754 /var/tmp/spdk2.sock 00:06:37.441 19:27:24 -- common/autotest_common.sh@829 -- # '[' -z 69754 ']' 00:06:37.441 19:27:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.441 19:27:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.441 19:27:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.441 19:27:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.441 19:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.700 19:27:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.700 19:27:24 -- common/autotest_common.sh@862 -- # return 0 00:06:37.700 19:27:24 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:37.700 19:27:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.700 19:27:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.958 ************************************ 00:06:37.958 END TEST locking_overlapped_coremask_via_rpc 00:06:37.958 ************************************ 00:06:37.958 19:27:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.958 00:06:37.958 real 0m3.402s 00:06:37.958 user 0m1.576s 00:06:37.958 sys 0m0.263s 00:06:37.958 19:27:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.958 19:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.958 19:27:24 -- event/cpu_locks.sh@174 -- # cleanup 00:06:37.958 19:27:24 -- event/cpu_locks.sh@15 -- # [[ -z 69721 ]] 00:06:37.958 19:27:24 -- event/cpu_locks.sh@15 -- # killprocess 69721 00:06:37.958 19:27:24 -- common/autotest_common.sh@936 -- # '[' -z 69721 ']' 00:06:37.958 19:27:24 -- common/autotest_common.sh@940 -- # kill -0 69721 00:06:37.958 19:27:24 -- common/autotest_common.sh@941 -- # uname 00:06:37.958 19:27:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.958 19:27:24 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 69721 00:06:37.958 killing process with pid 69721 00:06:37.958 19:27:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.958 19:27:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.958 19:27:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69721' 00:06:37.958 19:27:24 -- common/autotest_common.sh@955 -- # kill 69721 00:06:37.958 19:27:24 -- common/autotest_common.sh@960 -- # wait 69721 00:06:38.523 19:27:25 -- event/cpu_locks.sh@16 -- # [[ -z 69754 ]] 00:06:38.523 19:27:25 -- event/cpu_locks.sh@16 -- # killprocess 69754 00:06:38.523 19:27:25 -- common/autotest_common.sh@936 -- # '[' -z 69754 ']' 00:06:38.523 19:27:25 -- common/autotest_common.sh@940 -- # kill -0 69754 00:06:38.523 19:27:25 -- common/autotest_common.sh@941 -- # uname 00:06:38.523 19:27:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.523 19:27:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69754 00:06:38.523 killing process with pid 69754 00:06:38.523 19:27:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:38.523 19:27:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:38.523 19:27:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69754' 00:06:38.523 19:27:25 -- common/autotest_common.sh@955 -- # kill 69754 00:06:38.523 19:27:25 -- common/autotest_common.sh@960 -- # wait 69754 00:06:39.090 19:27:25 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.090 19:27:25 -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.090 19:27:25 -- event/cpu_locks.sh@15 -- # [[ -z 69721 ]] 00:06:39.090 19:27:25 -- event/cpu_locks.sh@15 -- # killprocess 69721 00:06:39.090 Process with pid 69721 is not found 00:06:39.090 Process with pid 69754 is not found 00:06:39.090 19:27:25 -- common/autotest_common.sh@936 -- # '[' -z 69721 ']' 00:06:39.090 19:27:25 -- common/autotest_common.sh@940 -- # kill -0 69721 00:06:39.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69721) - No such process 00:06:39.090 19:27:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69721 is not found' 00:06:39.090 19:27:25 -- event/cpu_locks.sh@16 -- # [[ -z 69754 ]] 00:06:39.090 19:27:25 -- event/cpu_locks.sh@16 -- # killprocess 69754 00:06:39.090 19:27:25 -- common/autotest_common.sh@936 -- # '[' -z 69754 ']' 00:06:39.090 19:27:25 -- common/autotest_common.sh@940 -- # kill -0 69754 00:06:39.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69754) - No such process 00:06:39.090 19:27:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69754 is not found' 00:06:39.090 19:27:25 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.090 00:06:39.090 real 0m22.321s 00:06:39.090 user 0m41.122s 00:06:39.090 sys 0m6.094s 00:06:39.090 19:27:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.090 ************************************ 00:06:39.090 END TEST cpu_locks 00:06:39.090 ************************************ 00:06:39.090 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.090 ************************************ 00:06:39.090 END TEST event 00:06:39.090 ************************************ 00:06:39.090 00:06:39.090 real 0m50.813s 00:06:39.090 user 1m39.973s 00:06:39.090 sys 0m10.079s 00:06:39.090 19:27:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.090 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.090 19:27:25 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.090 19:27:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.090 19:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.090 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.090 ************************************ 00:06:39.090 START TEST thread 00:06:39.090 ************************************ 00:06:39.090 19:27:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.090 * Looking for test storage... 00:06:39.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.090 19:27:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:39.090 19:27:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:39.090 19:27:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:39.090 19:27:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:39.090 19:27:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:39.090 19:27:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:39.090 19:27:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:39.090 19:27:25 -- scripts/common.sh@335 -- # IFS=.-: 00:06:39.090 19:27:25 -- scripts/common.sh@335 -- # read -ra ver1 00:06:39.090 19:27:25 -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.090 19:27:25 -- scripts/common.sh@336 -- # read -ra ver2 00:06:39.090 19:27:25 -- scripts/common.sh@337 -- # local 'op=<' 00:06:39.090 19:27:25 -- scripts/common.sh@339 -- # ver1_l=2 00:06:39.090 19:27:25 -- scripts/common.sh@340 -- # ver2_l=1 00:06:39.090 19:27:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:39.090 19:27:25 -- scripts/common.sh@343 -- # case "$op" in 00:06:39.090 19:27:25 -- scripts/common.sh@344 -- # : 1 00:06:39.090 19:27:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:39.090 19:27:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.090 19:27:25 -- scripts/common.sh@364 -- # decimal 1 00:06:39.090 19:27:25 -- scripts/common.sh@352 -- # local d=1 00:06:39.090 19:27:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.090 19:27:25 -- scripts/common.sh@354 -- # echo 1 00:06:39.090 19:27:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:39.090 19:27:25 -- scripts/common.sh@365 -- # decimal 2 00:06:39.090 19:27:25 -- scripts/common.sh@352 -- # local d=2 00:06:39.090 19:27:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.090 19:27:25 -- scripts/common.sh@354 -- # echo 2 00:06:39.090 19:27:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:39.090 19:27:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:39.090 19:27:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:39.090 19:27:25 -- scripts/common.sh@367 -- # return 0 00:06:39.090 19:27:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.090 19:27:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:39.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.091 --rc genhtml_branch_coverage=1 00:06:39.091 --rc genhtml_function_coverage=1 00:06:39.091 --rc genhtml_legend=1 00:06:39.091 --rc geninfo_all_blocks=1 00:06:39.091 --rc geninfo_unexecuted_blocks=1 00:06:39.091 00:06:39.091 ' 00:06:39.091 19:27:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:39.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.091 --rc genhtml_branch_coverage=1 00:06:39.091 --rc genhtml_function_coverage=1 00:06:39.091 --rc genhtml_legend=1 00:06:39.091 --rc geninfo_all_blocks=1 00:06:39.091 --rc geninfo_unexecuted_blocks=1 00:06:39.091 00:06:39.091 ' 00:06:39.091 19:27:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:39.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.091 --rc genhtml_branch_coverage=1 00:06:39.091 --rc genhtml_function_coverage=1 00:06:39.091 --rc genhtml_legend=1 00:06:39.091 --rc geninfo_all_blocks=1 00:06:39.091 --rc geninfo_unexecuted_blocks=1 00:06:39.091 00:06:39.091 ' 00:06:39.091 19:27:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:39.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.091 --rc genhtml_branch_coverage=1 00:06:39.091 --rc genhtml_function_coverage=1 00:06:39.091 --rc genhtml_legend=1 00:06:39.091 --rc geninfo_all_blocks=1 00:06:39.091 --rc geninfo_unexecuted_blocks=1 00:06:39.091 00:06:39.091 ' 00:06:39.091 19:27:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.091 19:27:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:39.349 19:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.349 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.349 ************************************ 00:06:39.349 START TEST thread_poller_perf 00:06:39.349 ************************************ 00:06:39.349 19:27:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.349 [2024-12-15 19:27:26.012671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:39.349 [2024-12-15 19:27:26.012760] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69928 ] 00:06:39.349 [2024-12-15 19:27:26.147491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.349 [2024-12-15 19:27:26.215300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.349 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.731 [2024-12-15T19:27:27.627Z] ====================================== 00:06:40.731 [2024-12-15T19:27:27.627Z] busy:2207252904 (cyc) 00:06:40.731 [2024-12-15T19:27:27.627Z] total_run_count: 388000 00:06:40.731 [2024-12-15T19:27:27.627Z] tsc_hz: 2200000000 (cyc) 00:06:40.731 [2024-12-15T19:27:27.627Z] ====================================== 00:06:40.731 [2024-12-15T19:27:27.627Z] poller_cost: 5688 (cyc), 2585 (nsec) 00:06:40.731 00:06:40.731 real 0m1.308s 00:06:40.731 user 0m1.140s 00:06:40.731 sys 0m0.061s 00:06:40.731 19:27:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.731 19:27:27 -- common/autotest_common.sh@10 -- # set +x 00:06:40.731 ************************************ 00:06:40.731 END TEST thread_poller_perf 00:06:40.731 ************************************ 00:06:40.731 19:27:27 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.731 19:27:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:40.731 19:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.731 19:27:27 -- common/autotest_common.sh@10 -- # set +x 00:06:40.731 ************************************ 00:06:40.731 START TEST thread_poller_perf 00:06:40.731 ************************************ 00:06:40.731 19:27:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.731 [2024-12-15 19:27:27.379313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.731 [2024-12-15 19:27:27.379417] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:06:40.731 [2024-12-15 19:27:27.515180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.990 [2024-12-15 19:27:27.640039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.990 Running 1000 pollers for 1 seconds with 0 microseconds period. 
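Note: poller_cost in the summary above is just the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A minimal bash sketch reproducing the first run's figures (numbers taken verbatim from the output above):

    busy=2207252904      # busy: TSC cycles spent in the 1 second run
    runs=388000          # total_run_count
    tsc_hz=2200000000    # tsc_hz reported by the tool
    cyc=$(( busy / runs ))                   # 5688 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2585 ns per poller invocation
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"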
00:06:41.925 [2024-12-15T19:27:28.821Z] ====================================== 00:06:41.925 [2024-12-15T19:27:28.821Z] busy:2202839428 (cyc) 00:06:41.925 [2024-12-15T19:27:28.821Z] total_run_count: 5374000 00:06:41.925 [2024-12-15T19:27:28.821Z] tsc_hz: 2200000000 (cyc) 00:06:41.925 [2024-12-15T19:27:28.821Z] ====================================== 00:06:41.925 [2024-12-15T19:27:28.821Z] poller_cost: 409 (cyc), 185 (nsec) 00:06:41.925 00:06:41.925 real 0m1.420s 00:06:41.925 user 0m1.236s 00:06:41.925 sys 0m0.076s 00:06:41.925 19:27:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.925 19:27:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.925 ************************************ 00:06:41.925 END TEST thread_poller_perf 00:06:41.925 ************************************ 00:06:42.183 19:27:28 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.183 00:06:42.183 real 0m3.004s 00:06:42.183 user 0m2.495s 00:06:42.183 sys 0m0.294s 00:06:42.183 19:27:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.183 19:27:28 -- common/autotest_common.sh@10 -- # set +x 00:06:42.183 ************************************ 00:06:42.183 END TEST thread 00:06:42.183 ************************************ 00:06:42.183 19:27:28 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:42.183 19:27:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.183 19:27:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.183 19:27:28 -- common/autotest_common.sh@10 -- # set +x 00:06:42.183 ************************************ 00:06:42.183 START TEST accel 00:06:42.183 ************************************ 00:06:42.183 19:27:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:42.183 * Looking for test storage... 00:06:42.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:42.183 19:27:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:42.183 19:27:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:42.183 19:27:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:42.183 19:27:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:42.183 19:27:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:42.183 19:27:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:42.183 19:27:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:42.183 19:27:29 -- scripts/common.sh@335 -- # IFS=.-: 00:06:42.183 19:27:29 -- scripts/common.sh@335 -- # read -ra ver1 00:06:42.183 19:27:29 -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.183 19:27:29 -- scripts/common.sh@336 -- # read -ra ver2 00:06:42.183 19:27:29 -- scripts/common.sh@337 -- # local 'op=<' 00:06:42.183 19:27:29 -- scripts/common.sh@339 -- # ver1_l=2 00:06:42.183 19:27:29 -- scripts/common.sh@340 -- # ver2_l=1 00:06:42.183 19:27:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:42.183 19:27:29 -- scripts/common.sh@343 -- # case "$op" in 00:06:42.183 19:27:29 -- scripts/common.sh@344 -- # : 1 00:06:42.184 19:27:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:42.184 19:27:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.184 19:27:29 -- scripts/common.sh@364 -- # decimal 1 00:06:42.184 19:27:29 -- scripts/common.sh@352 -- # local d=1 00:06:42.184 19:27:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.184 19:27:29 -- scripts/common.sh@354 -- # echo 1 00:06:42.184 19:27:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:42.184 19:27:29 -- scripts/common.sh@365 -- # decimal 2 00:06:42.184 19:27:29 -- scripts/common.sh@352 -- # local d=2 00:06:42.184 19:27:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.184 19:27:29 -- scripts/common.sh@354 -- # echo 2 00:06:42.184 19:27:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:42.184 19:27:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:42.184 19:27:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:42.184 19:27:29 -- scripts/common.sh@367 -- # return 0 00:06:42.184 19:27:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.184 19:27:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:42.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.184 --rc genhtml_branch_coverage=1 00:06:42.184 --rc genhtml_function_coverage=1 00:06:42.184 --rc genhtml_legend=1 00:06:42.184 --rc geninfo_all_blocks=1 00:06:42.184 --rc geninfo_unexecuted_blocks=1 00:06:42.184 00:06:42.184 ' 00:06:42.184 19:27:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:42.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.184 --rc genhtml_branch_coverage=1 00:06:42.184 --rc genhtml_function_coverage=1 00:06:42.184 --rc genhtml_legend=1 00:06:42.184 --rc geninfo_all_blocks=1 00:06:42.184 --rc geninfo_unexecuted_blocks=1 00:06:42.184 00:06:42.184 ' 00:06:42.184 19:27:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:42.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.184 --rc genhtml_branch_coverage=1 00:06:42.184 --rc genhtml_function_coverage=1 00:06:42.184 --rc genhtml_legend=1 00:06:42.184 --rc geninfo_all_blocks=1 00:06:42.184 --rc geninfo_unexecuted_blocks=1 00:06:42.184 00:06:42.184 ' 00:06:42.184 19:27:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:42.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.184 --rc genhtml_branch_coverage=1 00:06:42.184 --rc genhtml_function_coverage=1 00:06:42.184 --rc genhtml_legend=1 00:06:42.184 --rc geninfo_all_blocks=1 00:06:42.184 --rc geninfo_unexecuted_blocks=1 00:06:42.184 00:06:42.184 ' 00:06:42.184 19:27:29 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:42.184 19:27:29 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:42.184 19:27:29 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.184 19:27:29 -- accel/accel.sh@59 -- # spdk_tgt_pid=70045 00:06:42.184 19:27:29 -- accel/accel.sh@60 -- # waitforlisten 70045 00:06:42.184 19:27:29 -- common/autotest_common.sh@829 -- # '[' -z 70045 ']' 00:06:42.184 19:27:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.184 19:27:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.184 19:27:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:42.184 19:27:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.184 19:27:29 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.184 19:27:29 -- common/autotest_common.sh@10 -- # set +x 00:06:42.184 19:27:29 -- accel/accel.sh@58 -- # build_accel_config 00:06:42.184 19:27:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.184 19:27:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.184 19:27:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.184 19:27:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.184 19:27:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.184 19:27:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.184 19:27:29 -- accel/accel.sh@42 -- # jq -r . 00:06:42.442 [2024-12-15 19:27:29.125419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.442 [2024-12-15 19:27:29.125570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70045 ] 00:06:42.442 [2024-12-15 19:27:29.263471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.442 [2024-12-15 19:27:29.334209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.442 [2024-12-15 19:27:29.334416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.376 19:27:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.377 19:27:30 -- common/autotest_common.sh@862 -- # return 0 00:06:43.377 19:27:30 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:43.377 19:27:30 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:43.377 19:27:30 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:43.377 19:27:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.377 19:27:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.377 19:27:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 
19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # IFS== 00:06:43.377 19:27:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.377 19:27:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.377 19:27:30 -- accel/accel.sh@67 -- # killprocess 70045 00:06:43.377 19:27:30 -- common/autotest_common.sh@936 -- # '[' -z 70045 ']' 00:06:43.377 19:27:30 -- common/autotest_common.sh@940 -- # kill -0 70045 00:06:43.377 19:27:30 -- common/autotest_common.sh@941 -- # uname 00:06:43.377 19:27:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.377 19:27:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70045 00:06:43.377 19:27:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.377 19:27:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.377 killing process with pid 70045 00:06:43.377 19:27:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70045' 00:06:43.377 19:27:30 -- common/autotest_common.sh@955 -- # kill 70045 00:06:43.377 19:27:30 -- common/autotest_common.sh@960 -- # wait 70045 00:06:43.944 19:27:30 -- accel/accel.sh@68 -- # trap - ERR 00:06:43.944 19:27:30 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:43.944 19:27:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:43.944 19:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.944 19:27:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.944 19:27:30 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:43.944 19:27:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.944 19:27:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:43.944 19:27:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.944 19:27:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.944 19:27:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.944 19:27:30 -- accel/accel.sh@42 -- # jq -r . 
00:06:43.944 19:27:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.944 19:27:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.944 19:27:30 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:43.944 19:27:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.944 19:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.944 19:27:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.944 ************************************ 00:06:43.944 START TEST accel_missing_filename 00:06:43.944 ************************************ 00:06:43.944 19:27:30 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:43.944 19:27:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.944 19:27:30 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:43.944 19:27:30 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:43.944 19:27:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.944 19:27:30 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:43.944 19:27:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.944 19:27:30 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:43.944 19:27:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:43.944 19:27:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.944 19:27:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.944 19:27:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.944 19:27:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.944 19:27:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.944 19:27:30 -- accel/accel.sh@42 -- # jq -r . 00:06:43.945 [2024-12-15 19:27:30.760313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.945 [2024-12-15 19:27:30.760431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70122 ] 00:06:44.203 [2024-12-15 19:27:30.895476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.203 [2024-12-15 19:27:30.951805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.203 [2024-12-15 19:27:31.023718] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.462 [2024-12-15 19:27:31.126333] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.462 A filename is required. 
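Note: the compress run above failed only because no input file was given; per the tool's own help (printed further down), -l names the uncompressed input for compress/decompress workloads. A sketch of the same invocation with the input supplied (using the bib test file this suite already ships; whether it completes still depends on which accel modules are compiled in):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib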
00:06:44.462 19:27:31 -- common/autotest_common.sh@653 -- # es=234 00:06:44.462 19:27:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.462 19:27:31 -- common/autotest_common.sh@662 -- # es=106 00:06:44.462 19:27:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:44.462 19:27:31 -- common/autotest_common.sh@670 -- # es=1 00:06:44.462 19:27:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.462 00:06:44.462 real 0m0.494s 00:06:44.462 user 0m0.311s 00:06:44.462 sys 0m0.130s 00:06:44.462 19:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.462 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.462 ************************************ 00:06:44.462 END TEST accel_missing_filename 00:06:44.462 ************************************ 00:06:44.462 19:27:31 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.462 19:27:31 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:44.462 19:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.462 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.462 ************************************ 00:06:44.462 START TEST accel_compress_verify 00:06:44.462 ************************************ 00:06:44.462 19:27:31 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.462 19:27:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:44.462 19:27:31 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.462 19:27:31 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:44.462 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.462 19:27:31 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:44.462 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.462 19:27:31 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.462 19:27:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:44.462 19:27:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.462 19:27:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.462 19:27:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.462 19:27:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.462 19:27:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.462 19:27:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.462 19:27:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.462 19:27:31 -- accel/accel.sh@42 -- # jq -r . 00:06:44.462 [2024-12-15 19:27:31.299092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:44.462 [2024-12-15 19:27:31.299168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70141 ] 00:06:44.721 [2024-12-15 19:27:31.427588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.721 [2024-12-15 19:27:31.487066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.721 [2024-12-15 19:27:31.557006] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.979 [2024-12-15 19:27:31.660336] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.979 00:06:44.979 Compression does not support the verify option, aborting. 00:06:44.979 19:27:31 -- common/autotest_common.sh@653 -- # es=161 00:06:44.979 19:27:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.979 19:27:31 -- common/autotest_common.sh@662 -- # es=33 00:06:44.979 19:27:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:44.979 19:27:31 -- common/autotest_common.sh@670 -- # es=1 00:06:44.979 19:27:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.979 00:06:44.979 real 0m0.487s 00:06:44.979 user 0m0.305s 00:06:44.979 sys 0m0.126s 00:06:44.979 19:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.979 ************************************ 00:06:44.979 END TEST accel_compress_verify 00:06:44.980 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.980 ************************************ 00:06:44.980 19:27:31 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:44.980 19:27:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:44.980 19:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.980 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.980 ************************************ 00:06:44.980 START TEST accel_wrong_workload 00:06:44.980 ************************************ 00:06:44.980 19:27:31 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:44.980 19:27:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:44.980 19:27:31 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:44.980 19:27:31 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:44.980 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.980 19:27:31 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:44.980 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.980 19:27:31 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:44.980 19:27:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:44.980 19:27:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.980 19:27:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.980 19:27:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.980 19:27:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.980 19:27:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.980 19:27:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.980 19:27:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.980 19:27:31 -- accel/accel.sh@42 -- # jq -r . 
00:06:44.980 Unsupported workload type: foobar 00:06:44.980 [2024-12-15 19:27:31.847313] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:44.980 accel_perf options: 00:06:44.980 [-h help message] 00:06:44.980 [-q queue depth per core] 00:06:44.980 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.980 [-T number of threads per core 00:06:44.980 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.980 [-t time in seconds] 00:06:44.980 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.980 [ dif_verify, , dif_generate, dif_generate_copy 00:06:44.980 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.980 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.980 [-S for crc32c workload, use this seed value (default 0) 00:06:44.980 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.980 [-f for fill workload, use this BYTE value (default 255) 00:06:44.980 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.980 [-y verify result if this switch is on] 00:06:44.980 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.980 Can be used to spread operations across a wider range of memory. 00:06:44.980 19:27:31 -- common/autotest_common.sh@653 -- # es=1 00:06:44.980 19:27:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.980 19:27:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.980 19:27:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.980 00:06:44.980 real 0m0.032s 00:06:44.980 user 0m0.016s 00:06:44.980 sys 0m0.016s 00:06:44.980 19:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.980 ************************************ 00:06:44.980 END TEST accel_wrong_workload 00:06:44.980 ************************************ 00:06:44.980 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:45.239 19:27:31 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.239 19:27:31 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:45.239 19:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.239 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:45.239 ************************************ 00:06:45.239 START TEST accel_negative_buffers 00:06:45.239 ************************************ 00:06:45.239 19:27:31 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.239 19:27:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:45.239 19:27:31 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:45.239 19:27:31 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:45.239 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.239 19:27:31 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:45.239 19:27:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.239 19:27:31 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:45.239 19:27:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:45.239 19:27:31 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:45.239 19:27:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.239 19:27:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.239 19:27:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.239 19:27:31 -- accel/accel.sh@42 -- # jq -r . 00:06:45.239 -x option must be non-negative. 00:06:45.239 [2024-12-15 19:27:31.931799] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:45.239 accel_perf options: 00:06:45.239 [-h help message] 00:06:45.239 [-q queue depth per core] 00:06:45.239 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.239 [-T number of threads per core 00:06:45.239 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.239 [-t time in seconds] 00:06:45.239 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.239 [ dif_verify, , dif_generate, dif_generate_copy 00:06:45.239 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.239 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.239 [-S for crc32c workload, use this seed value (default 0) 00:06:45.239 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.239 [-f for fill workload, use this BYTE value (default 255) 00:06:45.239 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.239 [-y verify result if this switch is on] 00:06:45.239 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.239 Can be used to spread operations across a wider range of memory. 
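Note: the two negative tests above only exercise accel_perf's argument parsing (an unknown -w workload, then -x -1); by the help text the tool itself prints, xor needs a non-negative source buffer count of at least 2, so a form the parser accepts would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2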
00:06:45.239 19:27:31 -- common/autotest_common.sh@653 -- # es=1 00:06:45.239 19:27:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.239 19:27:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.239 19:27:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.239 00:06:45.239 real 0m0.030s 00:06:45.239 user 0m0.014s 00:06:45.239 sys 0m0.016s 00:06:45.239 19:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.239 ************************************ 00:06:45.239 END TEST accel_negative_buffers 00:06:45.239 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:45.239 ************************************ 00:06:45.239 19:27:31 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:45.239 19:27:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:45.239 19:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.239 19:27:31 -- common/autotest_common.sh@10 -- # set +x 00:06:45.239 ************************************ 00:06:45.239 START TEST accel_crc32c 00:06:45.239 ************************************ 00:06:45.239 19:27:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:45.239 19:27:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.239 19:27:31 -- accel/accel.sh@17 -- # local accel_module 00:06:45.239 19:27:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.239 19:27:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.239 19:27:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.239 19:27:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.239 19:27:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.239 19:27:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.239 19:27:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.239 19:27:31 -- accel/accel.sh@42 -- # jq -r . 00:06:45.239 [2024-12-15 19:27:32.012105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.239 [2024-12-15 19:27:32.012194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70205 ] 00:06:45.498 [2024-12-15 19:27:32.144650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.499 [2024-12-15 19:27:32.202836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.873 19:27:33 -- accel/accel.sh@18 -- # out=' 00:06:46.873 SPDK Configuration: 00:06:46.873 Core mask: 0x1 00:06:46.873 00:06:46.873 Accel Perf Configuration: 00:06:46.873 Workload Type: crc32c 00:06:46.873 CRC-32C seed: 32 00:06:46.873 Transfer size: 4096 bytes 00:06:46.873 Vector count 1 00:06:46.873 Module: software 00:06:46.873 Queue depth: 32 00:06:46.873 Allocate depth: 32 00:06:46.873 # threads/core: 1 00:06:46.873 Run time: 1 seconds 00:06:46.873 Verify: Yes 00:06:46.873 00:06:46.873 Running for 1 seconds... 
00:06:46.873 00:06:46.873 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.873 ------------------------------------------------------------------------------------ 00:06:46.873 0,0 577728/s 2256 MiB/s 0 0 00:06:46.873 ==================================================================================== 00:06:46.873 Total 577728/s 2256 MiB/s 0 0' 00:06:46.873 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:46.873 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:46.873 19:27:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.873 19:27:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:46.873 19:27:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.873 19:27:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.873 19:27:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.873 19:27:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.873 19:27:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.873 19:27:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.873 19:27:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.873 19:27:33 -- accel/accel.sh@42 -- # jq -r . 00:06:46.873 [2024-12-15 19:27:33.511004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:46.873 [2024-12-15 19:27:33.511127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:06:46.873 [2024-12-15 19:27:33.647279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.873 [2024-12-15 19:27:33.703026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=0x1 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=32 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=software 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=32 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=32 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=1 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val=Yes 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:47.132 19:27:33 -- accel/accel.sh@21 -- # val= 00:06:47.132 19:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # IFS=: 00:06:47.132 19:27:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.068 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.068 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.068 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.068 19:27:34 -- accel/accel.sh@20 -- # read -r var val 00:06:48.068 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.068 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # read -r var val 00:06:48.069 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.069 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # read -r var val 00:06:48.069 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.069 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # read -r var val 00:06:48.069 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.069 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.069 19:27:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:48.069 19:27:34 -- accel/accel.sh@21 -- # val= 00:06:48.069 19:27:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # IFS=: 00:06:48.069 19:27:34 -- accel/accel.sh@20 -- # read -r var val 00:06:48.069 19:27:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.069 19:27:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:48.069 19:27:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.069 00:06:48.069 real 0m2.966s 00:06:48.069 user 0m2.503s 00:06:48.069 sys 0m0.261s 00:06:48.069 19:27:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.069 19:27:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.069 ************************************ 00:06:48.069 END TEST accel_crc32c 00:06:48.069 ************************************ 00:06:48.328 19:27:34 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:48.328 19:27:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.328 19:27:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.328 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:06:48.328 ************************************ 00:06:48.328 START TEST accel_crc32c_C2 00:06:48.328 ************************************ 00:06:48.328 19:27:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:48.328 19:27:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.328 19:27:35 -- accel/accel.sh@17 -- # local accel_module 00:06:48.328 19:27:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:48.328 19:27:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:48.328 19:27:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.328 19:27:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.328 19:27:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.328 19:27:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.328 19:27:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.328 19:27:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.328 19:27:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.328 19:27:35 -- accel/accel.sh@42 -- # jq -r . 00:06:48.328 [2024-12-15 19:27:35.032736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.328 [2024-12-15 19:27:35.032809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70259 ] 00:06:48.328 [2024-12-15 19:27:35.161859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.586 [2024-12-15 19:27:35.222720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.000 19:27:36 -- accel/accel.sh@18 -- # out=' 00:06:50.000 SPDK Configuration: 00:06:50.000 Core mask: 0x1 00:06:50.000 00:06:50.000 Accel Perf Configuration: 00:06:50.000 Workload Type: crc32c 00:06:50.000 CRC-32C seed: 0 00:06:50.000 Transfer size: 4096 bytes 00:06:50.000 Vector count 2 00:06:50.000 Module: software 00:06:50.000 Queue depth: 32 00:06:50.000 Allocate depth: 32 00:06:50.000 # threads/core: 1 00:06:50.000 Run time: 1 seconds 00:06:50.000 Verify: Yes 00:06:50.000 00:06:50.000 Running for 1 seconds... 
00:06:50.000 00:06:50.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.000 ------------------------------------------------------------------------------------ 00:06:50.000 0,0 439392/s 3432 MiB/s 0 0 00:06:50.000 ==================================================================================== 00:06:50.000 Total 439392/s 1716 MiB/s 0 0' 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.000 19:27:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.000 19:27:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.000 19:27:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.000 19:27:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.000 19:27:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.000 19:27:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.000 19:27:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.000 19:27:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.000 19:27:36 -- accel/accel.sh@42 -- # jq -r . 00:06:50.000 [2024-12-15 19:27:36.526230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.000 [2024-12-15 19:27:36.526358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70273 ] 00:06:50.000 [2024-12-15 19:27:36.660128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.000 [2024-12-15 19:27:36.716396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=0x1 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=crc32c 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=0 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=software 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=32 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=32 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=1 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val=Yes 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.000 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.000 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:50.000 19:27:36 -- accel/accel.sh@21 -- # val= 00:06:50.001 19:27:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.001 19:27:36 -- accel/accel.sh@20 -- # IFS=: 00:06:50.001 19:27:36 -- accel/accel.sh@20 -- # read -r var val 00:06:51.376 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- 
accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@21 -- # val= 00:06:51.377 19:27:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # IFS=: 00:06:51.377 19:27:37 -- accel/accel.sh@20 -- # read -r var val 00:06:51.377 19:27:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.377 19:27:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:51.377 19:27:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.377 00:06:51.377 real 0m2.988s 00:06:51.377 user 0m2.522s 00:06:51.377 sys 0m0.263s 00:06:51.377 19:27:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.377 19:27:37 -- common/autotest_common.sh@10 -- # set +x 00:06:51.377 ************************************ 00:06:51.377 END TEST accel_crc32c_C2 00:06:51.377 ************************************ 00:06:51.377 19:27:38 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.377 19:27:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:51.377 19:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.377 19:27:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.377 ************************************ 00:06:51.377 START TEST accel_copy 00:06:51.377 ************************************ 00:06:51.377 19:27:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:51.377 19:27:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.377 19:27:38 -- accel/accel.sh@17 -- # local accel_module 00:06:51.377 19:27:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:51.377 19:27:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.377 19:27:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.377 19:27:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.377 19:27:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.377 19:27:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.377 19:27:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.377 19:27:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.377 19:27:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.377 19:27:38 -- accel/accel.sh@42 -- # jq -r . 00:06:51.377 [2024-12-15 19:27:38.076934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.377 [2024-12-15 19:27:38.077040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70313 ] 00:06:51.377 [2024-12-15 19:27:38.210980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.377 [2024-12-15 19:27:38.267964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.753 19:27:39 -- accel/accel.sh@18 -- # out=' 00:06:52.753 SPDK Configuration: 00:06:52.753 Core mask: 0x1 00:06:52.753 00:06:52.753 Accel Perf Configuration: 00:06:52.753 Workload Type: copy 00:06:52.753 Transfer size: 4096 bytes 00:06:52.753 Vector count 1 00:06:52.753 Module: software 00:06:52.753 Queue depth: 32 00:06:52.753 Allocate depth: 32 00:06:52.753 # threads/core: 1 00:06:52.753 Run time: 1 seconds 00:06:52.753 Verify: Yes 00:06:52.753 00:06:52.753 Running for 1 seconds... 
00:06:52.753 00:06:52.753 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.753 ------------------------------------------------------------------------------------ 00:06:52.753 0,0 397920/s 1554 MiB/s 0 0 00:06:52.753 ==================================================================================== 00:06:52.753 Total 397920/s 1554 MiB/s 0 0' 00:06:52.753 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:52.753 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:52.753 19:27:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:52.753 19:27:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.753 19:27:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.753 19:27:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.753 19:27:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.753 19:27:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.753 19:27:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.753 19:27:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.753 19:27:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.753 19:27:39 -- accel/accel.sh@42 -- # jq -r . 00:06:52.753 [2024-12-15 19:27:39.538357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.753 [2024-12-15 19:27:39.538617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70327 ] 00:06:53.012 [2024-12-15 19:27:39.671967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.012 [2024-12-15 19:27:39.726848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val=0x1 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val=copy 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- 
accel/accel.sh@21 -- # val= 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val=software 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.012 19:27:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.012 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.012 19:27:39 -- accel/accel.sh@21 -- # val=32 00:06:53.012 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val=32 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val=1 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val=Yes 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:53.013 19:27:39 -- accel/accel.sh@21 -- # val= 00:06:53.013 19:27:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # IFS=: 00:06:53.013 19:27:39 -- accel/accel.sh@20 -- # read -r var val 00:06:54.389 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.389 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # read -r var val 00:06:54.389 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.389 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # read -r var val 00:06:54.389 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.389 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # read -r var val 00:06:54.389 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.389 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.389 19:27:40 -- accel/accel.sh@20 -- # read -r var val 00:06:54.389 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.389 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.390 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.390 19:27:40 -- accel/accel.sh@20 -- # read -r var val 00:06:54.390 19:27:40 -- accel/accel.sh@21 -- # val= 00:06:54.390 19:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.390 19:27:40 -- accel/accel.sh@20 -- # IFS=: 00:06:54.390 19:27:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:54.390 19:27:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.390 19:27:40 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:54.390 ************************************ 00:06:54.390 END TEST accel_copy 00:06:54.390 ************************************ 00:06:54.390 19:27:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.390 00:06:54.390 real 0m2.921s 00:06:54.390 user 0m2.460s 00:06:54.390 sys 0m0.259s 00:06:54.390 19:27:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.390 19:27:40 -- common/autotest_common.sh@10 -- # set +x 00:06:54.390 19:27:41 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.390 19:27:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:54.390 19:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.390 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.390 ************************************ 00:06:54.390 START TEST accel_fill 00:06:54.390 ************************************ 00:06:54.390 19:27:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.390 19:27:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.390 19:27:41 -- accel/accel.sh@17 -- # local accel_module 00:06:54.390 19:27:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.390 19:27:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.390 19:27:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.390 19:27:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.390 19:27:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.390 19:27:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.390 19:27:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.390 19:27:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.390 19:27:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.390 19:27:41 -- accel/accel.sh@42 -- # jq -r . 00:06:54.390 [2024-12-15 19:27:41.053157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.390 [2024-12-15 19:27:41.053289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70364 ] 00:06:54.390 [2024-12-15 19:27:41.191269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.390 [2024-12-15 19:27:41.250321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.767 19:27:42 -- accel/accel.sh@18 -- # out=' 00:06:55.767 SPDK Configuration: 00:06:55.767 Core mask: 0x1 00:06:55.767 00:06:55.767 Accel Perf Configuration: 00:06:55.767 Workload Type: fill 00:06:55.767 Fill pattern: 0x80 00:06:55.767 Transfer size: 4096 bytes 00:06:55.767 Vector count 1 00:06:55.767 Module: software 00:06:55.767 Queue depth: 64 00:06:55.767 Allocate depth: 64 00:06:55.767 # threads/core: 1 00:06:55.767 Run time: 1 seconds 00:06:55.767 Verify: Yes 00:06:55.767 00:06:55.767 Running for 1 seconds... 
00:06:55.767 00:06:55.767 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.767 ------------------------------------------------------------------------------------ 00:06:55.767 0,0 582336/s 2274 MiB/s 0 0 00:06:55.767 ==================================================================================== 00:06:55.767 Total 582336/s 2274 MiB/s 0 0' 00:06:55.767 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:55.767 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:55.767 19:27:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.767 19:27:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.767 19:27:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.767 19:27:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.767 19:27:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.767 19:27:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.767 19:27:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.767 19:27:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.767 19:27:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.767 19:27:42 -- accel/accel.sh@42 -- # jq -r . 00:06:55.767 [2024-12-15 19:27:42.523983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.767 [2024-12-15 19:27:42.524778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70381 ] 00:06:55.767 [2024-12-15 19:27:42.661737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.026 [2024-12-15 19:27:42.720808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=0x1 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=fill 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=0x80 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 
00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=software 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=64 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=64 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=1 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val=Yes 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:56.026 19:27:42 -- accel/accel.sh@21 -- # val= 00:06:56.026 19:27:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # IFS=: 00:06:56.026 19:27:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 
00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@21 -- # val= 00:06:57.403 19:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # IFS=: 00:06:57.403 19:27:43 -- accel/accel.sh@20 -- # read -r var val 00:06:57.403 19:27:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.403 19:27:43 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:57.403 19:27:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.403 00:06:57.403 real 0m2.943s 00:06:57.403 user 0m2.464s 00:06:57.403 sys 0m0.274s 00:06:57.403 19:27:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.403 19:27:43 -- common/autotest_common.sh@10 -- # set +x 00:06:57.403 ************************************ 00:06:57.403 END TEST accel_fill 00:06:57.403 ************************************ 00:06:57.403 19:27:44 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:57.403 19:27:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:57.403 19:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.403 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:06:57.403 ************************************ 00:06:57.403 START TEST accel_copy_crc32c 00:06:57.403 ************************************ 00:06:57.403 19:27:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:57.403 19:27:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.403 19:27:44 -- accel/accel.sh@17 -- # local accel_module 00:06:57.403 19:27:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:57.403 19:27:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:57.403 19:27:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.403 19:27:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.403 19:27:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.403 19:27:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.403 19:27:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.403 19:27:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.403 19:27:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.403 19:27:44 -- accel/accel.sh@42 -- # jq -r . 00:06:57.403 [2024-12-15 19:27:44.045744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.404 [2024-12-15 19:27:44.046505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70416 ] 00:06:57.404 [2024-12-15 19:27:44.175640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.404 [2024-12-15 19:27:44.238187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.782 19:27:45 -- accel/accel.sh@18 -- # out=' 00:06:58.782 SPDK Configuration: 00:06:58.782 Core mask: 0x1 00:06:58.782 00:06:58.782 Accel Perf Configuration: 00:06:58.782 Workload Type: copy_crc32c 00:06:58.782 CRC-32C seed: 0 00:06:58.782 Vector size: 4096 bytes 00:06:58.782 Transfer size: 4096 bytes 00:06:58.782 Vector count 1 00:06:58.782 Module: software 00:06:58.782 Queue depth: 32 00:06:58.782 Allocate depth: 32 00:06:58.782 # threads/core: 1 00:06:58.782 Run time: 1 seconds 00:06:58.782 Verify: Yes 00:06:58.782 00:06:58.782 Running for 1 seconds... 
00:06:58.782 00:06:58.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.782 ------------------------------------------------------------------------------------ 00:06:58.782 0,0 312896/s 1222 MiB/s 0 0 00:06:58.782 ==================================================================================== 00:06:58.782 Total 312896/s 1222 MiB/s 0 0' 00:06:58.782 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.782 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.782 19:27:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.782 19:27:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.782 19:27:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.782 19:27:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.782 19:27:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.782 19:27:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.782 19:27:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.782 19:27:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.782 19:27:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.782 19:27:45 -- accel/accel.sh@42 -- # jq -r . 00:06:58.782 [2024-12-15 19:27:45.542683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.782 [2024-12-15 19:27:45.542760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70437 ] 00:06:58.782 [2024-12-15 19:27:45.672006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.041 [2024-12-15 19:27:45.738720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=0x1 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=0 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 
19:27:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=software 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=32 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=32 00:06:59.041 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.041 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.041 19:27:45 -- accel/accel.sh@21 -- # val=1 00:06:59.042 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.042 19:27:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.042 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.042 19:27:45 -- accel/accel.sh@21 -- # val=Yes 00:06:59.042 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.042 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.042 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:06:59.042 19:27:45 -- accel/accel.sh@21 -- # val= 00:06:59.042 19:27:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # IFS=: 00:06:59.042 19:27:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 
00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@21 -- # val= 00:07:00.418 19:27:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # IFS=: 00:07:00.418 19:27:46 -- accel/accel.sh@20 -- # read -r var val 00:07:00.418 19:27:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.418 19:27:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:00.418 19:27:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.418 00:07:00.418 real 0m2.972s 00:07:00.418 user 0m2.514s 00:07:00.418 sys 0m0.257s 00:07:00.418 19:27:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.418 19:27:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 END TEST accel_copy_crc32c 00:07:00.418 ************************************ 00:07:00.418 19:27:47 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.418 19:27:47 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:00.418 19:27:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.418 19:27:47 -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 START TEST accel_copy_crc32c_C2 00:07:00.418 ************************************ 00:07:00.418 19:27:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.418 19:27:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.418 19:27:47 -- accel/accel.sh@17 -- # local accel_module 00:07:00.418 19:27:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.418 19:27:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.418 19:27:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.418 19:27:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.418 19:27:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.418 19:27:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.418 19:27:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.418 19:27:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.418 19:27:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.418 19:27:47 -- accel/accel.sh@42 -- # jq -r . 00:07:00.418 [2024-12-15 19:27:47.080069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:00.418 [2024-12-15 19:27:47.080171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:07:00.418 [2024-12-15 19:27:47.214643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.418 [2024-12-15 19:27:47.278487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.794 19:27:48 -- accel/accel.sh@18 -- # out=' 00:07:01.794 SPDK Configuration: 00:07:01.794 Core mask: 0x1 00:07:01.794 00:07:01.794 Accel Perf Configuration: 00:07:01.794 Workload Type: copy_crc32c 00:07:01.794 CRC-32C seed: 0 00:07:01.794 Vector size: 4096 bytes 00:07:01.794 Transfer size: 8192 bytes 00:07:01.794 Vector count 2 00:07:01.794 Module: software 00:07:01.794 Queue depth: 32 00:07:01.794 Allocate depth: 32 00:07:01.794 # threads/core: 1 00:07:01.794 Run time: 1 seconds 00:07:01.794 Verify: Yes 00:07:01.794 00:07:01.794 Running for 1 seconds... 00:07:01.794 00:07:01.794 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.794 ------------------------------------------------------------------------------------ 00:07:01.794 0,0 223392/s 1745 MiB/s 0 0 00:07:01.794 ==================================================================================== 00:07:01.794 Total 223392/s 872 MiB/s 0 0' 00:07:01.794 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.794 19:27:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.794 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.794 19:27:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.794 19:27:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.794 19:27:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.794 19:27:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.794 19:27:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.794 19:27:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.794 19:27:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.794 19:27:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.794 19:27:48 -- accel/accel.sh@42 -- # jq -r . 00:07:01.794 [2024-12-15 19:27:48.589151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:01.794 [2024-12-15 19:27:48.589397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70491 ] 00:07:02.064 [2024-12-15 19:27:48.722669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.064 [2024-12-15 19:27:48.781533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=0x1 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=0 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=software 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=32 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=32 
00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=1 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val=Yes 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:02.064 19:27:48 -- accel/accel.sh@21 -- # val= 00:07:02.064 19:27:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # IFS=: 00:07:02.064 19:27:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@21 -- # val= 00:07:03.495 19:27:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.495 19:27:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.495 19:27:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.495 19:27:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:03.495 19:27:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.495 00:07:03.495 real 0m3.017s 00:07:03.495 user 0m2.570s 00:07:03.495 sys 0m0.247s 00:07:03.495 19:27:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.495 ************************************ 00:07:03.495 END TEST accel_copy_crc32c_C2 00:07:03.495 ************************************ 00:07:03.495 19:27:50 -- common/autotest_common.sh@10 -- # set +x 00:07:03.495 19:27:50 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:03.495 19:27:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:03.495 19:27:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.495 19:27:50 -- common/autotest_common.sh@10 -- # set +x 00:07:03.495 ************************************ 00:07:03.495 START TEST accel_dualcast 00:07:03.495 ************************************ 00:07:03.495 19:27:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:03.495 19:27:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.495 19:27:50 -- accel/accel.sh@17 -- # local accel_module 00:07:03.495 19:27:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:03.495 19:27:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.495 19:27:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.495 19:27:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.495 19:27:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.495 19:27:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.495 19:27:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.495 19:27:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.495 19:27:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.495 19:27:50 -- accel/accel.sh@42 -- # jq -r . 00:07:03.495 [2024-12-15 19:27:50.148857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.495 [2024-12-15 19:27:50.148960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:07:03.495 [2024-12-15 19:27:50.283402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.495 [2024-12-15 19:27:50.343633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.870 19:27:51 -- accel/accel.sh@18 -- # out=' 00:07:04.870 SPDK Configuration: 00:07:04.870 Core mask: 0x1 00:07:04.870 00:07:04.870 Accel Perf Configuration: 00:07:04.870 Workload Type: dualcast 00:07:04.870 Transfer size: 4096 bytes 00:07:04.870 Vector count 1 00:07:04.870 Module: software 00:07:04.870 Queue depth: 32 00:07:04.870 Allocate depth: 32 00:07:04.870 # threads/core: 1 00:07:04.870 Run time: 1 seconds 00:07:04.870 Verify: Yes 00:07:04.870 00:07:04.870 Running for 1 seconds... 00:07:04.870 00:07:04.870 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.870 ------------------------------------------------------------------------------------ 00:07:04.870 0,0 435680/s 1701 MiB/s 0 0 00:07:04.870 ==================================================================================== 00:07:04.870 Total 435680/s 1701 MiB/s 0 0' 00:07:04.870 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.870 19:27:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:04.870 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.870 19:27:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.870 19:27:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.870 19:27:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.870 19:27:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.870 19:27:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.870 19:27:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.870 19:27:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.870 19:27:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.870 19:27:51 -- accel/accel.sh@42 -- # jq -r . 
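The dualcast workload measured above writes the same 4096-byte source buffer to two destination buffers per transfer; in the software module this is essentially two memcpy calls, and the verify pass checks both copies. A toy sketch (not SPDK code):

```python
def dualcast(src: bytes):
    """Copy one source buffer into two destinations, as the dualcast workload does."""
    return bytearray(src), bytearray(src)

src = b"\xab" * 4096                # matches "Transfer size: 4096 bytes"
dst1, dst2 = dualcast(src)
assert dst1 == src and dst2 == src  # what "Verify: Yes" re-checks after each transfer
```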
00:07:04.870 [2024-12-15 19:27:51.626331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:04.870 [2024-12-15 19:27:51.626413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:07:04.870 [2024-12-15 19:27:51.747659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.129 [2024-12-15 19:27:51.825361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=0x1 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=dualcast 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=software 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=32 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=32 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=1 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 
19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val=Yes 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:05.129 19:27:51 -- accel/accel.sh@21 -- # val= 00:07:05.129 19:27:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # IFS=: 00:07:05.129 19:27:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@21 -- # val= 00:07:06.505 19:27:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.505 19:27:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.505 19:27:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.505 19:27:53 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:06.505 19:27:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.505 00:07:06.505 real 0m2.996s 00:07:06.505 user 0m2.535s 00:07:06.505 sys 0m0.259s 00:07:06.505 19:27:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.505 ************************************ 00:07:06.505 END TEST accel_dualcast 00:07:06.505 ************************************ 00:07:06.505 19:27:53 -- common/autotest_common.sh@10 -- # set +x 00:07:06.505 19:27:53 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:06.505 19:27:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:06.505 19:27:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.505 19:27:53 -- common/autotest_common.sh@10 -- # set +x 00:07:06.505 ************************************ 00:07:06.505 START TEST accel_compare 00:07:06.505 ************************************ 00:07:06.505 19:27:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:06.505 
19:27:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.505 19:27:53 -- accel/accel.sh@17 -- # local accel_module 00:07:06.505 19:27:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:06.505 19:27:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.505 19:27:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.505 19:27:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.505 19:27:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.505 19:27:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.505 19:27:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.505 19:27:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.505 19:27:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.505 19:27:53 -- accel/accel.sh@42 -- # jq -r . 00:07:06.505 [2024-12-15 19:27:53.205662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.505 [2024-12-15 19:27:53.205746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:07:06.505 [2024-12-15 19:27:53.337296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.764 [2024-12-15 19:27:53.411891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.140 19:27:54 -- accel/accel.sh@18 -- # out=' 00:07:08.140 SPDK Configuration: 00:07:08.140 Core mask: 0x1 00:07:08.140 00:07:08.140 Accel Perf Configuration: 00:07:08.140 Workload Type: compare 00:07:08.140 Transfer size: 4096 bytes 00:07:08.140 Vector count 1 00:07:08.140 Module: software 00:07:08.140 Queue depth: 32 00:07:08.140 Allocate depth: 32 00:07:08.140 # threads/core: 1 00:07:08.140 Run time: 1 seconds 00:07:08.140 Verify: Yes 00:07:08.140 00:07:08.140 Running for 1 seconds... 00:07:08.140 00:07:08.140 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.140 ------------------------------------------------------------------------------------ 00:07:08.140 0,0 569152/s 2223 MiB/s 0 0 00:07:08.140 ==================================================================================== 00:07:08.140 Total 569152/s 2223 MiB/s 0 0' 00:07:08.140 19:27:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.140 19:27:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.140 19:27:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.140 19:27:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.140 19:27:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.140 19:27:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.140 19:27:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.140 19:27:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.140 19:27:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.140 19:27:54 -- accel/accel.sh@42 -- # jq -r . 00:07:08.140 [2024-12-15 19:27:54.724853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
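The compare workload simply memcmp-checks two source buffers; the Failed/Miscompares columns count transfers whose buffers did not match. The surrounding numbers follow from the configuration printed above: 32 buffer pairs stay allocated (queue/allocate depth 32), one worker thread runs for 1 second, and bandwidth is transfers multiplied by the 4096-byte transfer size. A toy model of that loop, under those assumptions only (not the accel_perf implementation):

```python
import time

def run_compare(bufs_a, bufs_b, run_time_s=1.0):
    """Keep re-submitting compares over the allocated buffer pairs for ~1 s and count completions."""
    transfers = 0
    deadline = time.monotonic() + run_time_s
    while time.monotonic() < deadline:
        for a, b in zip(bufs_a, bufs_b):        # one sweep over the 32 allocated pairs
            if a != b:
                raise ValueError("miscompare")  # would increment the Miscompares column
            transfers += 1
    return transfers

pairs = 32                                      # "Queue depth: 32" / "Allocate depth: 32"
a = [bytes(4096) for _ in range(pairs)]
b = [bytes(4096) for _ in range(pairs)]
n = run_compare(a, b)
print(n, "transfers,", n * 4096 / (1 << 20), "MiB/s")
```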
00:07:08.140 [2024-12-15 19:27:54.724945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70599 ] 00:07:08.140 [2024-12-15 19:27:54.856783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.140 [2024-12-15 19:27:54.928300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.140 19:27:54 -- accel/accel.sh@21 -- # val= 00:07:08.140 19:27:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=0x1 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=compare 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=software 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=32 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=32 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=1 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val=Yes 00:07:08.140 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.140 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.140 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.141 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.141 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.141 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:08.141 19:27:55 -- accel/accel.sh@21 -- # val= 00:07:08.141 19:27:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.141 19:27:55 -- accel/accel.sh@20 -- # IFS=: 00:07:08.141 19:27:55 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@21 -- # val= 00:07:09.517 19:27:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.517 19:27:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.517 19:27:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.517 19:27:56 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:09.517 19:27:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.517 00:07:09.517 real 0m3.071s 00:07:09.517 user 0m2.593s 00:07:09.517 sys 0m0.276s 00:07:09.517 19:27:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.517 19:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:09.517 ************************************ 00:07:09.517 END TEST accel_compare 00:07:09.517 ************************************ 00:07:09.517 19:27:56 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:09.517 19:27:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.517 19:27:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.517 19:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:09.517 ************************************ 00:07:09.517 START TEST accel_xor 00:07:09.517 ************************************ 00:07:09.517 19:27:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:09.517 19:27:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.517 19:27:56 -- accel/accel.sh@17 -- # local accel_module 00:07:09.517 
19:27:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:09.517 19:27:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.517 19:27:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.517 19:27:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.517 19:27:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.517 19:27:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.517 19:27:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.517 19:27:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.517 19:27:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.517 19:27:56 -- accel/accel.sh@42 -- # jq -r . 00:07:09.517 [2024-12-15 19:27:56.331705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:09.517 [2024-12-15 19:27:56.331801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70635 ] 00:07:09.775 [2024-12-15 19:27:56.467002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.775 [2024-12-15 19:27:56.584474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.152 19:27:57 -- accel/accel.sh@18 -- # out=' 00:07:11.152 SPDK Configuration: 00:07:11.152 Core mask: 0x1 00:07:11.152 00:07:11.152 Accel Perf Configuration: 00:07:11.152 Workload Type: xor 00:07:11.152 Source buffers: 2 00:07:11.152 Transfer size: 4096 bytes 00:07:11.152 Vector count 1 00:07:11.152 Module: software 00:07:11.152 Queue depth: 32 00:07:11.152 Allocate depth: 32 00:07:11.152 # threads/core: 1 00:07:11.152 Run time: 1 seconds 00:07:11.152 Verify: Yes 00:07:11.152 00:07:11.152 Running for 1 seconds... 00:07:11.152 00:07:11.152 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.152 ------------------------------------------------------------------------------------ 00:07:11.152 0,0 265856/s 1038 MiB/s 0 0 00:07:11.152 ==================================================================================== 00:07:11.152 Total 265856/s 1038 MiB/s 0 0' 00:07:11.152 19:27:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:11.152 19:27:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.152 19:27:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.152 19:27:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.152 19:27:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.152 19:27:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.152 19:27:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.152 19:27:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.152 19:27:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.152 19:27:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.153 19:27:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.153 19:27:57 -- accel/accel.sh@42 -- # jq -r . 00:07:11.153 [2024-12-15 19:27:57.916499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
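The xor workload XORs its source buffers together into one destination buffer; this run uses two 4096-byte sources, and the -x 3 variant that follows uses three. A minimal sketch that handles any source count (illustrative, not SPDK code):

```python
def xor_buffers(sources):
    """XOR equal-length source buffers into one destination buffer."""
    dst = bytearray(sources[0])
    for src in sources[1:]:
        for i, byte in enumerate(src):
            dst[i] ^= byte
    return bytes(dst)

out = xor_buffers([b"\x0f" * 4096, b"\xf0" * 4096])   # "Source buffers: 2"
assert out == b"\xff" * 4096
```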
00:07:11.153 [2024-12-15 19:27:57.916735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70654 ] 00:07:11.153 [2024-12-15 19:27:58.037923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.411 [2024-12-15 19:27:58.100235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val=0x1 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val=xor 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val=2 00:07:11.411 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.411 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.411 19:27:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val=software 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val=32 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val=32 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val=1 00:07:11.412 19:27:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val=Yes 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:11.412 19:27:58 -- accel/accel.sh@21 -- # val= 00:07:11.412 19:27:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # IFS=: 00:07:11.412 19:27:58 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@21 -- # val= 00:07:12.789 19:27:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.789 19:27:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.789 19:27:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.789 19:27:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:12.789 19:27:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.789 00:07:12.789 real 0m3.088s 00:07:12.789 user 0m2.617s 00:07:12.789 sys 0m0.264s 00:07:12.789 19:27:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.789 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:07:12.789 ************************************ 00:07:12.789 END TEST accel_xor 00:07:12.789 ************************************ 00:07:12.789 19:27:59 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.789 19:27:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:12.789 19:27:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.789 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:07:12.789 ************************************ 00:07:12.789 START TEST accel_xor 00:07:12.789 ************************************ 00:07:12.789 
19:27:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.789 19:27:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.789 19:27:59 -- accel/accel.sh@17 -- # local accel_module 00:07:12.789 19:27:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.789 19:27:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.789 19:27:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.789 19:27:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.789 19:27:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.789 19:27:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.789 19:27:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.789 19:27:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.789 19:27:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.789 19:27:59 -- accel/accel.sh@42 -- # jq -r . 00:07:12.789 [2024-12-15 19:27:59.473663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.789 [2024-12-15 19:27:59.473756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70689 ] 00:07:12.789 [2024-12-15 19:27:59.610749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.789 [2024-12-15 19:27:59.679592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.165 19:28:00 -- accel/accel.sh@18 -- # out=' 00:07:14.165 SPDK Configuration: 00:07:14.165 Core mask: 0x1 00:07:14.165 00:07:14.165 Accel Perf Configuration: 00:07:14.165 Workload Type: xor 00:07:14.165 Source buffers: 3 00:07:14.165 Transfer size: 4096 bytes 00:07:14.165 Vector count 1 00:07:14.165 Module: software 00:07:14.165 Queue depth: 32 00:07:14.165 Allocate depth: 32 00:07:14.165 # threads/core: 1 00:07:14.165 Run time: 1 seconds 00:07:14.165 Verify: Yes 00:07:14.165 00:07:14.165 Running for 1 seconds... 00:07:14.165 00:07:14.165 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.165 ------------------------------------------------------------------------------------ 00:07:14.165 0,0 254112/s 992 MiB/s 0 0 00:07:14.165 ==================================================================================== 00:07:14.165 Total 254112/s 992 MiB/s 0 0' 00:07:14.165 19:28:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.165 19:28:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.165 19:28:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:14.165 19:28:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:14.165 19:28:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.165 19:28:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.165 19:28:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.165 19:28:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.165 19:28:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.165 19:28:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.165 19:28:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.165 19:28:00 -- accel/accel.sh@42 -- # jq -r . 00:07:14.165 [2024-12-15 19:28:00.958216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
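The bandwidth column in these tables is just transfers per second multiplied by the transfer size; checking the three-source xor run above:

```python
transfers_per_s = 254112                 # 0,0 row of the table above
transfer_size = 4096                     # "Transfer size: 4096 bytes"
mib_per_s = transfers_per_s * transfer_size / (1 << 20)
print(f"{mib_per_s:.1f} MiB/s")          # ~992.6, reported (truncated) as 992 MiB/s
```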
00:07:14.165 [2024-12-15 19:28:00.958325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70703 ] 00:07:14.424 [2024-12-15 19:28:01.091978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.424 [2024-12-15 19:28:01.164584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=0x1 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=xor 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=3 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=software 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=32 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=32 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=1 00:07:14.424 19:28:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val=Yes 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:14.424 19:28:01 -- accel/accel.sh@21 -- # val= 00:07:14.424 19:28:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # IFS=: 00:07:14.424 19:28:01 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@21 -- # val= 00:07:15.825 19:28:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.825 19:28:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.825 19:28:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.825 19:28:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:15.825 19:28:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.825 ************************************ 00:07:15.825 END TEST accel_xor 00:07:15.825 ************************************ 00:07:15.825 00:07:15.825 real 0m3.004s 00:07:15.825 user 0m2.525s 00:07:15.825 sys 0m0.274s 00:07:15.825 19:28:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.825 19:28:02 -- common/autotest_common.sh@10 -- # set +x 00:07:15.825 19:28:02 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:15.825 19:28:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:15.825 19:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.825 19:28:02 -- common/autotest_common.sh@10 -- # set +x 00:07:15.825 ************************************ 00:07:15.825 START TEST accel_dif_verify 00:07:15.825 ************************************ 
00:07:15.825 19:28:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:15.825 19:28:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.825 19:28:02 -- accel/accel.sh@17 -- # local accel_module 00:07:15.825 19:28:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:15.825 19:28:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:15.825 19:28:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.825 19:28:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.825 19:28:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.825 19:28:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.825 19:28:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.825 19:28:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.825 19:28:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.825 19:28:02 -- accel/accel.sh@42 -- # jq -r . 00:07:15.825 [2024-12-15 19:28:02.532948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.825 [2024-12-15 19:28:02.533044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70743 ] 00:07:15.825 [2024-12-15 19:28:02.665659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.084 [2024-12-15 19:28:02.794109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.460 19:28:04 -- accel/accel.sh@18 -- # out=' 00:07:17.460 SPDK Configuration: 00:07:17.460 Core mask: 0x1 00:07:17.460 00:07:17.460 Accel Perf Configuration: 00:07:17.460 Workload Type: dif_verify 00:07:17.460 Vector size: 4096 bytes 00:07:17.460 Transfer size: 4096 bytes 00:07:17.460 Block size: 512 bytes 00:07:17.460 Metadata size: 8 bytes 00:07:17.460 Vector count 1 00:07:17.460 Module: software 00:07:17.460 Queue depth: 32 00:07:17.460 Allocate depth: 32 00:07:17.460 # threads/core: 1 00:07:17.460 Run time: 1 seconds 00:07:17.460 Verify: No 00:07:17.460 00:07:17.460 Running for 1 seconds... 00:07:17.460 00:07:17.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.460 ------------------------------------------------------------------------------------ 00:07:17.460 0,0 125664/s 498 MiB/s 0 0 00:07:17.460 ==================================================================================== 00:07:17.460 Total 125664/s 490 MiB/s 0 0' 00:07:17.460 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.460 19:28:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:17.460 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.460 19:28:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:17.460 19:28:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.460 19:28:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.460 19:28:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.460 19:28:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.460 19:28:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.460 19:28:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.460 19:28:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.460 19:28:04 -- accel/accel.sh@42 -- # jq -r . 00:07:17.460 [2024-12-15 19:28:04.110773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
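dif_verify checks T10 DIF protection information: each 4096-byte transfer is treated as eight 512-byte blocks, and every block carries 8 bytes of metadata (a 2-byte guard CRC, a 2-byte application tag and a 4-byte reference tag), matching the block and metadata sizes printed above. A simplified sketch of the guard check (assuming the usual T10-DIF CRC-16 polynomial 0x8BB7; tag handling is reduced to the minimum and is not the SPDK implementation):

```python
import struct

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def dif_verify(data: bytes, pi: bytes, block_size=512, md_size=8) -> bool:
    """Recompute the guard CRC for each block and compare it with the stored one."""
    for i in range(len(data) // block_size):
        block = data[i * block_size:(i + 1) * block_size]
        guard, app_tag, ref_tag = struct.unpack(">HHI", pi[i * md_size:(i + 1) * md_size])
        if guard != crc16_t10dif(block):
            return False                     # would show up in the Failed column
    return True
```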
00:07:17.460 [2024-12-15 19:28:04.111077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70757 ] 00:07:17.460 [2024-12-15 19:28:04.247310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.460 [2024-12-15 19:28:04.318448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=0x1 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=dif_verify 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=software 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 
-- # val=32 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=32 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=1 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val=No 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.719 19:28:04 -- accel/accel.sh@21 -- # val= 00:07:17.719 19:28:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.719 19:28:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@21 -- # val= 00:07:19.095 19:28:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # IFS=: 00:07:19.095 19:28:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.095 19:28:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.095 19:28:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:19.095 19:28:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.095 00:07:19.095 real 0m3.120s 00:07:19.095 user 0m2.638s 00:07:19.095 sys 0m0.282s 00:07:19.095 19:28:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.095 ************************************ 00:07:19.095 END TEST accel_dif_verify 00:07:19.095 ************************************ 00:07:19.095 
19:28:05 -- common/autotest_common.sh@10 -- # set +x 00:07:19.095 19:28:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:19.095 19:28:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:19.095 19:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.095 19:28:05 -- common/autotest_common.sh@10 -- # set +x 00:07:19.095 ************************************ 00:07:19.095 START TEST accel_dif_generate 00:07:19.095 ************************************ 00:07:19.095 19:28:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:19.095 19:28:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.095 19:28:05 -- accel/accel.sh@17 -- # local accel_module 00:07:19.095 19:28:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:19.095 19:28:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.095 19:28:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.095 19:28:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.095 19:28:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.095 19:28:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.095 19:28:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.095 19:28:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.095 19:28:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.095 19:28:05 -- accel/accel.sh@42 -- # jq -r . 00:07:19.095 [2024-12-15 19:28:05.704855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.095 [2024-12-15 19:28:05.704949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70800 ] 00:07:19.095 [2024-12-15 19:28:05.839955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.095 [2024-12-15 19:28:05.908282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.471 19:28:07 -- accel/accel.sh@18 -- # out=' 00:07:20.472 SPDK Configuration: 00:07:20.472 Core mask: 0x1 00:07:20.472 00:07:20.472 Accel Perf Configuration: 00:07:20.472 Workload Type: dif_generate 00:07:20.472 Vector size: 4096 bytes 00:07:20.472 Transfer size: 4096 bytes 00:07:20.472 Block size: 512 bytes 00:07:20.472 Metadata size: 8 bytes 00:07:20.472 Vector count 1 00:07:20.472 Module: software 00:07:20.472 Queue depth: 32 00:07:20.472 Allocate depth: 32 00:07:20.472 # threads/core: 1 00:07:20.472 Run time: 1 seconds 00:07:20.472 Verify: No 00:07:20.472 00:07:20.472 Running for 1 seconds... 
00:07:20.472 00:07:20.472 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.472 ------------------------------------------------------------------------------------ 00:07:20.472 0,0 150336/s 596 MiB/s 0 0 00:07:20.472 ==================================================================================== 00:07:20.472 Total 150336/s 587 MiB/s 0 0' 00:07:20.472 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.472 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.472 19:28:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:20.472 19:28:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.472 19:28:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.472 19:28:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.472 19:28:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.472 19:28:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.472 19:28:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.472 19:28:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.472 19:28:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.472 19:28:07 -- accel/accel.sh@42 -- # jq -r . 00:07:20.472 [2024-12-15 19:28:07.186395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.472 [2024-12-15 19:28:07.186675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70814 ] 00:07:20.472 [2024-12-15 19:28:07.317490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.731 [2024-12-15 19:28:07.381357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=0x1 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=dif_generate 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 
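The dif_generate summary above is self-consistent: at the 4096-byte transfer size, 150336 transfers/s works out to roughly 587 MiB/s, the figure on the Total line, and the 512-byte block size with an 8-byte metadata size matches the usual per-block DIF protection field the workload generates. A quick shell check of the bandwidth figure:

    echo $(( 150336 * 4096 / 1024 / 1024 ))   # prints 587 (MiB/s)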
00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=software 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=32 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=32 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=1 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val=No 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.731 19:28:07 -- accel/accel.sh@21 -- # val= 00:07:20.731 19:28:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.731 19:28:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- 
accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 ************************************ 00:07:22.111 END TEST accel_dif_generate 00:07:22.111 ************************************ 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@21 -- # val= 00:07:22.111 19:28:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.111 19:28:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.111 19:28:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.111 19:28:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:22.111 19:28:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.111 00:07:22.111 real 0m2.981s 00:07:22.111 user 0m2.505s 00:07:22.111 sys 0m0.277s 00:07:22.111 19:28:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.111 19:28:08 -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 19:28:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.111 19:28:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:22.111 19:28:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.111 19:28:08 -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 ************************************ 00:07:22.111 START TEST accel_dif_generate_copy 00:07:22.111 ************************************ 00:07:22.111 19:28:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.111 19:28:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.111 19:28:08 -- accel/accel.sh@17 -- # local accel_module 00:07:22.111 19:28:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.111 19:28:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.111 19:28:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.111 19:28:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.111 19:28:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.111 19:28:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.111 19:28:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.111 19:28:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.111 19:28:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.111 19:28:08 -- accel/accel.sh@42 -- # jq -r . 00:07:22.111 [2024-12-15 19:28:08.748190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
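Each accel test in this log appears to invoke accel_perf twice: once to capture the report shown in the out=' ... ' block, and a second traced invocation whose printed settings are read back through the IFS=: / read -r var val loop and checked (the [[ -n software ]] and [[ -n dif_generate ]] tests above). The bracketed startup lines record the DPDK EAL parameters for each invocation, including a unique --file-prefix=spdk_pidNNNNN per run.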
00:07:22.111 [2024-12-15 19:28:08.748302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:07:22.111 [2024-12-15 19:28:08.883316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.111 [2024-12-15 19:28:08.944653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.488 19:28:10 -- accel/accel.sh@18 -- # out=' 00:07:23.488 SPDK Configuration: 00:07:23.488 Core mask: 0x1 00:07:23.488 00:07:23.488 Accel Perf Configuration: 00:07:23.488 Workload Type: dif_generate_copy 00:07:23.488 Vector size: 4096 bytes 00:07:23.488 Transfer size: 4096 bytes 00:07:23.488 Vector count 1 00:07:23.488 Module: software 00:07:23.488 Queue depth: 32 00:07:23.488 Allocate depth: 32 00:07:23.488 # threads/core: 1 00:07:23.488 Run time: 1 seconds 00:07:23.488 Verify: No 00:07:23.488 00:07:23.488 Running for 1 seconds... 00:07:23.488 00:07:23.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.488 ------------------------------------------------------------------------------------ 00:07:23.488 0,0 117728/s 467 MiB/s 0 0 00:07:23.488 ==================================================================================== 00:07:23.488 Total 117728/s 459 MiB/s 0 0' 00:07:23.488 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.488 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.488 19:28:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.488 19:28:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.488 19:28:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.488 19:28:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.488 19:28:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.488 19:28:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.488 19:28:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.488 19:28:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.488 19:28:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.488 19:28:10 -- accel/accel.sh@42 -- # jq -r . 00:07:23.488 [2024-12-15 19:28:10.223688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
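For dif_generate_copy the configuration lists only the vector and transfer sizes (no separate block or metadata size lines), and the Total line again follows from the transfer count: 117728 transfers/s at 4096 bytes is about 459 MiB/s.

    echo $(( 117728 * 4096 / 1024 / 1024 ))   # prints 459 (MiB/s)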
00:07:23.488 [2024-12-15 19:28:10.223780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:07:23.488 [2024-12-15 19:28:10.358288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.747 [2024-12-15 19:28:10.414910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=0x1 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=software 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=32 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=32 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 
-- # val=1 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val=No 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:23.747 19:28:10 -- accel/accel.sh@21 -- # val= 00:07:23.747 19:28:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # IFS=: 00:07:23.747 19:28:10 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 19:28:11 -- accel/accel.sh@21 -- # val= 00:07:25.123 19:28:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.123 19:28:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.123 ************************************ 00:07:25.123 END TEST accel_dif_generate_copy 00:07:25.123 ************************************ 00:07:25.123 19:28:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.123 19:28:11 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:25.123 19:28:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.123 00:07:25.123 real 0m2.957s 00:07:25.123 user 0m2.484s 00:07:25.123 sys 0m0.272s 00:07:25.123 19:28:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.123 19:28:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.123 19:28:11 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:25.123 19:28:11 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.123 19:28:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:25.123 19:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.123 19:28:11 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.123 ************************************ 00:07:25.123 START TEST accel_comp 00:07:25.123 ************************************ 00:07:25.123 19:28:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.123 19:28:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.123 19:28:11 -- accel/accel.sh@17 -- # local accel_module 00:07:25.123 19:28:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.123 19:28:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.123 19:28:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.123 19:28:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.123 19:28:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.123 19:28:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.123 19:28:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.123 19:28:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.123 19:28:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.123 19:28:11 -- accel/accel.sh@42 -- # jq -r . 00:07:25.123 [2024-12-15 19:28:11.755459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.123 [2024-12-15 19:28:11.755560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70908 ] 00:07:25.123 [2024-12-15 19:28:11.882663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.123 [2024-12-15 19:28:11.944331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.513 19:28:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.513 00:07:26.513 SPDK Configuration: 00:07:26.513 Core mask: 0x1 00:07:26.513 00:07:26.513 Accel Perf Configuration: 00:07:26.513 Workload Type: compress 00:07:26.513 Transfer size: 4096 bytes 00:07:26.513 Vector count 1 00:07:26.513 Module: software 00:07:26.513 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.513 Queue depth: 32 00:07:26.513 Allocate depth: 32 00:07:26.513 # threads/core: 1 00:07:26.513 Run time: 1 seconds 00:07:26.513 Verify: No 00:07:26.513 00:07:26.513 Running for 1 seconds... 
00:07:26.513 00:07:26.513 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.513 ------------------------------------------------------------------------------------ 00:07:26.513 0,0 59552/s 248 MiB/s 0 0 00:07:26.513 ==================================================================================== 00:07:26.513 Total 59552/s 232 MiB/s 0 0' 00:07:26.513 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.513 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.513 19:28:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.513 19:28:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.513 19:28:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.513 19:28:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.513 19:28:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.513 19:28:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.513 19:28:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.513 19:28:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.513 19:28:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.513 19:28:13 -- accel/accel.sh@42 -- # jq -r . 00:07:26.513 [2024-12-15 19:28:13.252771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.513 [2024-12-15 19:28:13.252921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70922 ] 00:07:26.513 [2024-12-15 19:28:13.390834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.785 [2024-12-15 19:28:13.446476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val=0x1 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val=compress 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 
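The compress run reads its input from the file passed with -l (reported as File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib in the configuration) and lands at 59552 transfers/s, about 232 MiB/s at 4096 bytes. A hypothetical standalone reproduction of the traced command, assuming accel_perf falls back to the software module when the harness-generated JSON config (-c /dev/fd/62) is omitted:

    # Sketch only: rerun the compress workload outside the test harness.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib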
00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.785 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.785 19:28:13 -- accel/accel.sh@21 -- # val=software 00:07:26.785 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.785 19:28:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val=32 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val=32 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val=1 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val=No 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.786 19:28:13 -- accel/accel.sh@21 -- # val= 00:07:26.786 19:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.786 19:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:28.164 19:28:14 -- accel/accel.sh@21 -- # val= 00:07:28.164 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.164 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.164 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.164 19:28:14 -- accel/accel.sh@21 -- # val= 00:07:28.165 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.165 19:28:14 -- accel/accel.sh@21 -- # val= 00:07:28.165 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.165 19:28:14 -- accel/accel.sh@21 -- # val= 
00:07:28.165 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.165 19:28:14 -- accel/accel.sh@21 -- # val= 00:07:28.165 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.165 19:28:14 -- accel/accel.sh@21 -- # val= 00:07:28.165 19:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:28.165 19:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.165 19:28:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.165 19:28:14 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:28.165 19:28:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.165 00:07:28.165 real 0m2.979s 00:07:28.165 user 0m2.513s 00:07:28.165 sys 0m0.263s 00:07:28.165 19:28:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.165 19:28:14 -- common/autotest_common.sh@10 -- # set +x 00:07:28.165 ************************************ 00:07:28.165 END TEST accel_comp 00:07:28.165 ************************************ 00:07:28.165 19:28:14 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.165 19:28:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.165 19:28:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.165 19:28:14 -- common/autotest_common.sh@10 -- # set +x 00:07:28.165 ************************************ 00:07:28.165 START TEST accel_decomp 00:07:28.165 ************************************ 00:07:28.165 19:28:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.165 19:28:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.165 19:28:14 -- accel/accel.sh@17 -- # local accel_module 00:07:28.165 19:28:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.165 19:28:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.165 19:28:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.165 19:28:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.165 19:28:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.165 19:28:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.165 19:28:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.165 19:28:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.165 19:28:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.165 19:28:14 -- accel/accel.sh@42 -- # jq -r . 00:07:28.165 [2024-12-15 19:28:14.787228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.165 [2024-12-15 19:28:14.787322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:07:28.165 [2024-12-15 19:28:14.923219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.165 [2024-12-15 19:28:14.984781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.555 19:28:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:29.555 00:07:29.555 SPDK Configuration: 00:07:29.555 Core mask: 0x1 00:07:29.555 00:07:29.555 Accel Perf Configuration: 00:07:29.555 Workload Type: decompress 00:07:29.555 Transfer size: 4096 bytes 00:07:29.555 Vector count 1 00:07:29.555 Module: software 00:07:29.555 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.555 Queue depth: 32 00:07:29.555 Allocate depth: 32 00:07:29.555 # threads/core: 1 00:07:29.555 Run time: 1 seconds 00:07:29.555 Verify: Yes 00:07:29.555 00:07:29.555 Running for 1 seconds... 00:07:29.555 00:07:29.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.555 ------------------------------------------------------------------------------------ 00:07:29.555 0,0 82944/s 152 MiB/s 0 0 00:07:29.555 ==================================================================================== 00:07:29.555 Total 82944/s 324 MiB/s 0 0' 00:07:29.555 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.555 19:28:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.555 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.555 19:28:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.555 19:28:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.555 19:28:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.555 19:28:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.555 19:28:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.555 19:28:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.555 19:28:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.555 19:28:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.555 19:28:16 -- accel/accel.sh@42 -- # jq -r . 00:07:29.555 [2024-12-15 19:28:16.314361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
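Unlike the dif_* and compress runs above (Verify: No), the decompress test is invoked with the extra -y argument and its configuration reports Verify: Yes, so the output buffers are verified rather than only timed. The Total line again matches the transfer count: 82944 transfers/s at 4096 bytes is 324 MiB/s.

    echo $(( 82944 * 4096 / 1024 / 1024 ))   # prints 324 (MiB/s)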
00:07:29.555 [2024-12-15 19:28:16.314472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70976 ] 00:07:29.813 [2024-12-15 19:28:16.450393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.813 [2024-12-15 19:28:16.571812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val=0x1 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val=decompress 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.813 19:28:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.813 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.813 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val=software 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val=32 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- 
accel/accel.sh@21 -- # val=32 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val=1 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val=Yes 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.814 19:28:16 -- accel/accel.sh@21 -- # val= 00:07:29.814 19:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.814 19:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@21 -- # val= 00:07:31.193 19:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.193 19:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.193 19:28:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.193 19:28:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.193 19:28:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.193 00:07:31.193 real 0m3.103s 00:07:31.193 user 0m2.627s 00:07:31.193 sys 0m0.275s 00:07:31.193 19:28:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.193 ************************************ 00:07:31.193 19:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 END TEST accel_decomp 00:07:31.193 ************************************ 00:07:31.193 19:28:17 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
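The next test, accel_decmop_full (the spelling follows the run_test name used by the harness), repeats the decompress workload but adds -o 0 to the accel_perf arguments; the configuration that follows reports a transfer size of 111250 bytes instead of the 4096 bytes used elsewhere, which suggests the full uncompressed chunk is processed per operation.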
00:07:31.193 19:28:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:31.193 19:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.193 19:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 ************************************ 00:07:31.193 START TEST accel_decmop_full 00:07:31.193 ************************************ 00:07:31.193 19:28:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.194 19:28:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.194 19:28:17 -- accel/accel.sh@17 -- # local accel_module 00:07:31.194 19:28:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.194 19:28:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.194 19:28:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.194 19:28:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.194 19:28:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.194 19:28:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.194 19:28:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.194 19:28:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.194 19:28:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.194 19:28:17 -- accel/accel.sh@42 -- # jq -r . 00:07:31.194 [2024-12-15 19:28:17.934348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:31.194 [2024-12-15 19:28:17.934427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:07:31.194 [2024-12-15 19:28:18.060418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.452 [2024-12-15 19:28:18.121948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.828 19:28:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.828 00:07:32.828 SPDK Configuration: 00:07:32.828 Core mask: 0x1 00:07:32.828 00:07:32.828 Accel Perf Configuration: 00:07:32.828 Workload Type: decompress 00:07:32.828 Transfer size: 111250 bytes 00:07:32.828 Vector count 1 00:07:32.828 Module: software 00:07:32.828 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.828 Queue depth: 32 00:07:32.828 Allocate depth: 32 00:07:32.828 # threads/core: 1 00:07:32.828 Run time: 1 seconds 00:07:32.828 Verify: Yes 00:07:32.828 00:07:32.828 Running for 1 seconds... 
00:07:32.828 00:07:32.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.828 ------------------------------------------------------------------------------------ 00:07:32.828 0,0 5696/s 235 MiB/s 0 0 00:07:32.828 ==================================================================================== 00:07:32.828 Total 5696/s 604 MiB/s 0 0' 00:07:32.828 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.828 19:28:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.828 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.828 19:28:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.828 19:28:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.828 19:28:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.828 19:28:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.828 19:28:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.828 19:28:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.828 19:28:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.828 19:28:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.828 19:28:19 -- accel/accel.sh@42 -- # jq -r . 00:07:32.828 [2024-12-15 19:28:19.404983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.828 [2024-12-15 19:28:19.405081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:07:32.828 [2024-12-15 19:28:19.540608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.828 [2024-12-15 19:28:19.600081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.828 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.828 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.828 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.828 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=0x1 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=decompress 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.829 19:28:19 -- accel/accel.sh@20 
-- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=software 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=32 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=32 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=1 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val=Yes 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.829 19:28:19 -- accel/accel.sh@21 -- # val= 00:07:32.829 19:28:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.829 19:28:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # 
val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@21 -- # val= 00:07:34.204 19:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:34.204 19:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.204 19:28:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.204 19:28:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.204 19:28:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.204 00:07:34.204 real 0m2.951s 00:07:34.204 user 0m2.486s 00:07:34.204 sys 0m0.265s 00:07:34.204 19:28:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.204 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:07:34.204 ************************************ 00:07:34.204 END TEST accel_decmop_full 00:07:34.204 ************************************ 00:07:34.204 19:28:20 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.204 19:28:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:34.204 19:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.204 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:07:34.204 ************************************ 00:07:34.204 START TEST accel_decomp_mcore 00:07:34.204 ************************************ 00:07:34.204 19:28:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.204 19:28:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.204 19:28:20 -- accel/accel.sh@17 -- # local accel_module 00:07:34.204 19:28:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.204 19:28:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.204 19:28:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.204 19:28:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.204 19:28:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.204 19:28:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.204 19:28:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.204 19:28:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.204 19:28:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.204 19:28:20 -- accel/accel.sh@42 -- # jq -r . 00:07:34.204 [2024-12-15 19:28:20.941760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
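The decmop_full summary (5696 transfers/s) is consistent with that larger 111250-byte transfer size: 5696 x 111250 bytes is roughly 604 MiB/s, the value on its Total line, even though the raw transfer rate is far lower than in the 4096-byte runs.

    echo $(( 5696 * 111250 / 1024 / 1024 ))   # prints 604 (MiB/s)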
00:07:34.204 [2024-12-15 19:28:20.941907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71065 ] 00:07:34.204 [2024-12-15 19:28:21.080180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.463 [2024-12-15 19:28:21.144889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.463 [2024-12-15 19:28:21.145015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.463 [2024-12-15 19:28:21.145152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.463 [2024-12-15 19:28:21.145156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.838 19:28:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.838 00:07:35.838 SPDK Configuration: 00:07:35.838 Core mask: 0xf 00:07:35.838 00:07:35.838 Accel Perf Configuration: 00:07:35.838 Workload Type: decompress 00:07:35.838 Transfer size: 4096 bytes 00:07:35.838 Vector count 1 00:07:35.838 Module: software 00:07:35.838 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.838 Queue depth: 32 00:07:35.838 Allocate depth: 32 00:07:35.838 # threads/core: 1 00:07:35.838 Run time: 1 seconds 00:07:35.838 Verify: Yes 00:07:35.838 00:07:35.838 Running for 1 seconds... 00:07:35.838 00:07:35.838 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.838 ------------------------------------------------------------------------------------ 00:07:35.838 0,0 57088/s 105 MiB/s 0 0 00:07:35.838 3,0 56096/s 103 MiB/s 0 0 00:07:35.838 2,0 55584/s 102 MiB/s 0 0 00:07:35.838 1,0 55360/s 102 MiB/s 0 0 00:07:35.838 ==================================================================================== 00:07:35.838 Total 224128/s 875 MiB/s 0 0' 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.838 19:28:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.838 19:28:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.838 19:28:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.838 19:28:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.838 19:28:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.838 19:28:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.838 19:28:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.838 19:28:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.838 19:28:22 -- accel/accel.sh@42 -- # jq -r . 00:07:35.838 [2024-12-15 19:28:22.440213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:35.838 [2024-12-15 19:28:22.440314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71087 ] 00:07:35.838 [2024-12-15 19:28:22.574370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.838 [2024-12-15 19:28:22.637631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.838 [2024-12-15 19:28:22.637761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.838 [2024-12-15 19:28:22.637892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.838 [2024-12-15 19:28:22.637893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val=0xf 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val=decompress 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.838 19:28:22 -- accel/accel.sh@21 -- # val=software 00:07:35.838 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.838 19:28:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.838 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 
00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val=32 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val=32 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val=1 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val=Yes 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.839 19:28:22 -- accel/accel.sh@21 -- # val= 00:07:35.839 19:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.839 19:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@21 -- # val= 00:07:37.215 19:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # IFS=: 00:07:37.215 19:28:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.215 19:28:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.215 19:28:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.215 19:28:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.215 00:07:37.215 real 0m3.038s 00:07:37.215 user 0m9.646s 00:07:37.215 sys 0m0.293s 00:07:37.215 19:28:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.215 ************************************ 00:07:37.215 END TEST accel_decomp_mcore 00:07:37.215 ************************************ 00:07:37.215 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:07:37.215 19:28:23 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.215 19:28:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:37.215 19:28:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.215 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:07:37.215 ************************************ 00:07:37.215 START TEST accel_decomp_full_mcore 00:07:37.215 ************************************ 00:07:37.215 19:28:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.215 19:28:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.215 19:28:24 -- accel/accel.sh@17 -- # local accel_module 00:07:37.215 19:28:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.215 19:28:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.215 19:28:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.215 19:28:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.215 19:28:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.215 19:28:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.215 19:28:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.215 19:28:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.215 19:28:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.215 19:28:24 -- accel/accel.sh@42 -- # jq -r . 00:07:37.215 [2024-12-15 19:28:24.029640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.215 [2024-12-15 19:28:24.029756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71125 ] 00:07:37.473 [2024-12-15 19:28:24.163901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.473 [2024-12-15 19:28:24.238185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.473 [2024-12-15 19:28:24.238338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.473 [2024-12-15 19:28:24.238395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.473 [2024-12-15 19:28:24.238398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.847 19:28:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.847 00:07:38.847 SPDK Configuration: 00:07:38.847 Core mask: 0xf 00:07:38.847 00:07:38.847 Accel Perf Configuration: 00:07:38.847 Workload Type: decompress 00:07:38.847 Transfer size: 111250 bytes 00:07:38.847 Vector count 1 00:07:38.847 Module: software 00:07:38.847 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.847 Queue depth: 32 00:07:38.847 Allocate depth: 32 00:07:38.847 # threads/core: 1 00:07:38.847 Run time: 1 seconds 00:07:38.847 Verify: Yes 00:07:38.847 00:07:38.847 Running for 1 seconds... 00:07:38.847 00:07:38.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.847 ------------------------------------------------------------------------------------ 00:07:38.847 0,0 5344/s 220 MiB/s 0 0 00:07:38.847 3,0 4576/s 189 MiB/s 0 0 00:07:38.847 2,0 5312/s 219 MiB/s 0 0 00:07:38.847 1,0 4448/s 183 MiB/s 0 0 00:07:38.847 ==================================================================================== 00:07:38.847 Total 19680/s 2087 MiB/s 0 0' 00:07:38.847 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:38.847 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:38.847 19:28:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.847 19:28:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.847 19:28:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.847 19:28:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.847 19:28:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.847 19:28:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.847 19:28:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.847 19:28:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.847 19:28:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.847 19:28:25 -- accel/accel.sh@42 -- # jq -r . 00:07:38.847 [2024-12-15 19:28:25.554100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.847 [2024-12-15 19:28:25.554683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71147 ] 00:07:38.847 [2024-12-15 19:28:25.688059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.106 [2024-12-15 19:28:25.757850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.106 [2024-12-15 19:28:25.757967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.106 [2024-12-15 19:28:25.759001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.106 [2024-12-15 19:28:25.759013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=0xf 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=decompress 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=software 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=32 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=32 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=1 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val=Yes 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.106 19:28:25 -- accel/accel.sh@21 -- # val= 00:07:39.106 19:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.106 19:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.481 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.481 19:28:27 -- 
accel/accel.sh@20 -- # read -r var val 00:07:40.481 19:28:27 -- accel/accel.sh@21 -- # val= 00:07:40.481 19:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.482 19:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.482 19:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.482 19:28:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.482 19:28:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.482 19:28:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.482 00:07:40.482 real 0m3.042s 00:07:40.482 user 0m4.877s 00:07:40.482 sys 0m0.148s 00:07:40.482 19:28:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.482 ************************************ 00:07:40.482 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:07:40.482 END TEST accel_decomp_full_mcore 00:07:40.482 ************************************ 00:07:40.482 19:28:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.482 19:28:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:40.482 19:28:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.482 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:07:40.482 ************************************ 00:07:40.482 START TEST accel_decomp_mthread 00:07:40.482 ************************************ 00:07:40.482 19:28:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.482 19:28:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.482 19:28:27 -- accel/accel.sh@17 -- # local accel_module 00:07:40.482 19:28:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.482 19:28:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.482 19:28:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.482 19:28:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.482 19:28:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.482 19:28:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.482 19:28:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.482 19:28:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.482 19:28:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.482 19:28:27 -- accel/accel.sh@42 -- # jq -r . 00:07:40.482 [2024-12-15 19:28:27.123187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.482 [2024-12-15 19:28:27.123286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71185 ] 00:07:40.482 [2024-12-15 19:28:27.258735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.482 [2024-12-15 19:28:27.323781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.857 19:28:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:41.857 00:07:41.858 SPDK Configuration: 00:07:41.858 Core mask: 0x1 00:07:41.858 00:07:41.858 Accel Perf Configuration: 00:07:41.858 Workload Type: decompress 00:07:41.858 Transfer size: 4096 bytes 00:07:41.858 Vector count 1 00:07:41.858 Module: software 00:07:41.858 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.858 Queue depth: 32 00:07:41.858 Allocate depth: 32 00:07:41.858 # threads/core: 2 00:07:41.858 Run time: 1 seconds 00:07:41.858 Verify: Yes 00:07:41.858 00:07:41.858 Running for 1 seconds... 00:07:41.858 00:07:41.858 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.858 ------------------------------------------------------------------------------------ 00:07:41.858 0,1 41952/s 77 MiB/s 0 0 00:07:41.858 0,0 41792/s 77 MiB/s 0 0 00:07:41.858 ==================================================================================== 00:07:41.858 Total 83744/s 327 MiB/s 0 0' 00:07:41.858 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:41.858 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:41.858 19:28:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.858 19:28:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.858 19:28:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.858 19:28:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.858 19:28:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.858 19:28:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.858 19:28:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.858 19:28:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.858 19:28:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.858 19:28:28 -- accel/accel.sh@42 -- # jq -r . 00:07:41.858 [2024-12-15 19:28:28.637494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.858 [2024-12-15 19:28:28.637588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:07:42.116 [2024-12-15 19:28:28.773867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.116 [2024-12-15 19:28:28.835314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=0x1 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=decompress 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=software 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=32 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- 
accel/accel.sh@21 -- # val=32 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=2 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val=Yes 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.116 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.116 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:42.116 19:28:28 -- accel/accel.sh@21 -- # val= 00:07:42.117 19:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.117 19:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:42.117 19:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:43.490 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.490 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.490 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.490 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.490 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.490 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.490 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.490 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.491 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.491 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.491 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.491 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.491 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.491 19:28:30 -- accel/accel.sh@21 -- # val= 00:07:43.491 19:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.491 19:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.491 19:28:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.491 19:28:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.491 19:28:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.491 00:07:43.491 real 0m3.009s 00:07:43.491 user 0m2.528s 00:07:43.491 sys 0m0.273s 00:07:43.491 19:28:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.491 19:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:43.491 ************************************ 00:07:43.491 END 
TEST accel_decomp_mthread 00:07:43.491 ************************************ 00:07:43.491 19:28:30 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.491 19:28:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.491 19:28:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.491 19:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:43.491 ************************************ 00:07:43.491 START TEST accel_deomp_full_mthread 00:07:43.491 ************************************ 00:07:43.491 19:28:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.491 19:28:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.491 19:28:30 -- accel/accel.sh@17 -- # local accel_module 00:07:43.491 19:28:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.491 19:28:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.491 19:28:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.491 19:28:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.491 19:28:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.491 19:28:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.491 19:28:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.491 19:28:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.491 19:28:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.491 19:28:30 -- accel/accel.sh@42 -- # jq -r . 00:07:43.491 [2024-12-15 19:28:30.190461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.491 [2024-12-15 19:28:30.190560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71239 ] 00:07:43.491 [2024-12-15 19:28:30.326207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.749 [2024-12-15 19:28:30.388062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.125 19:28:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:45.125 00:07:45.125 SPDK Configuration: 00:07:45.125 Core mask: 0x1 00:07:45.125 00:07:45.125 Accel Perf Configuration: 00:07:45.125 Workload Type: decompress 00:07:45.125 Transfer size: 111250 bytes 00:07:45.125 Vector count 1 00:07:45.125 Module: software 00:07:45.125 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.125 Queue depth: 32 00:07:45.125 Allocate depth: 32 00:07:45.125 # threads/core: 2 00:07:45.125 Run time: 1 seconds 00:07:45.125 Verify: Yes 00:07:45.125 00:07:45.125 Running for 1 seconds... 
00:07:45.125 00:07:45.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.125 ------------------------------------------------------------------------------------ 00:07:45.125 0,1 2880/s 118 MiB/s 0 0 00:07:45.125 0,0 2880/s 118 MiB/s 0 0 00:07:45.125 ==================================================================================== 00:07:45.125 Total 5760/s 611 MiB/s 0 0' 00:07:45.125 19:28:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.125 19:28:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.125 19:28:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.125 19:28:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.125 19:28:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.125 19:28:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.125 19:28:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.125 19:28:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.125 19:28:31 -- accel/accel.sh@42 -- # jq -r . 00:07:45.125 [2024-12-15 19:28:31.687573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.125 [2024-12-15 19:28:31.687656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71258 ] 00:07:45.125 [2024-12-15 19:28:31.815000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.125 [2024-12-15 19:28:31.870986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=0x1 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=decompress 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=software 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=32 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=32 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=2 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val=Yes 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.125 19:28:31 -- accel/accel.sh@21 -- # val= 00:07:45.125 19:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.125 19:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # 
read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@21 -- # val= 00:07:46.501 19:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.501 19:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.501 19:28:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.501 19:28:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.501 19:28:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.501 00:07:46.501 real 0m3.016s 00:07:46.501 user 0m2.548s 00:07:46.501 sys 0m0.262s 00:07:46.501 19:28:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.501 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:46.501 ************************************ 00:07:46.501 END TEST accel_deomp_full_mthread 00:07:46.501 ************************************ 00:07:46.501 19:28:33 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:46.501 19:28:33 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.501 19:28:33 -- accel/accel.sh@129 -- # build_accel_config 00:07:46.501 19:28:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:46.501 19:28:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.501 19:28:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.501 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:46.501 19:28:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.501 19:28:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.501 19:28:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.501 19:28:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.501 19:28:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.501 19:28:33 -- accel/accel.sh@42 -- # jq -r . 00:07:46.501 ************************************ 00:07:46.501 START TEST accel_dif_functional_tests 00:07:46.501 ************************************ 00:07:46.501 19:28:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.501 [2024-12-15 19:28:33.291136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.501 [2024-12-15 19:28:33.291398] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:07:46.760 [2024-12-15 19:28:33.426440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.760 [2024-12-15 19:28:33.484608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.760 [2024-12-15 19:28:33.484773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.760 [2024-12-15 19:28:33.484774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.760 00:07:46.760 00:07:46.760 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.760 http://cunit.sourceforge.net/ 00:07:46.760 00:07:46.760 00:07:46.760 Suite: accel_dif 00:07:46.760 Test: verify: DIF generated, GUARD check ...passed 00:07:46.760 Test: verify: DIF generated, APPTAG check ...passed 00:07:46.760 Test: verify: DIF generated, REFTAG check ...passed 00:07:46.760 Test: verify: DIF not generated, GUARD check ...[2024-12-15 19:28:33.596530] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.760 passed 00:07:46.760 Test: verify: DIF not generated, APPTAG check ...[2024-12-15 19:28:33.596679] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.760 passed 00:07:46.760 Test: verify: DIF not generated, REFTAG check ...[2024-12-15 19:28:33.596749] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.760 [2024-12-15 19:28:33.596838] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.760 [2024-12-15 19:28:33.596876] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.760 passed 00:07:46.760 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:46.760 Test: verify: APPTAG incorrect, APPTAG check ...passed[2024-12-15 19:28:33.596985] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.760 [2024-12-15 19:28:33.597060] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:46.760 00:07:46.760 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:46.760 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:46.760 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:46.760 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:46.760 Test: generate copy: DIF generated, GUARD check ...[2024-12-15 19:28:33.597314] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:46.760 passed 00:07:46.760 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:46.760 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:46.760 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:46.760 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:46.760 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:46.760 Test: generate copy: iovecs-len validate ...[2024-12-15 19:28:33.597974] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:46.760 passed 00:07:46.760 Test: generate copy: buffer alignment validate ...passed 00:07:46.760 00:07:46.760 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.760 suites 1 1 n/a 0 0 00:07:46.760 tests 20 20 20 0 0 00:07:46.760 asserts 204 204 204 0 n/a 00:07:46.760 00:07:46.760 Elapsed time = 0.005 seconds 00:07:47.018 00:07:47.018 real 0m0.623s 00:07:47.018 user 0m0.914s 00:07:47.018 sys 0m0.180s 00:07:47.018 ************************************ 00:07:47.018 END TEST accel_dif_functional_tests 00:07:47.018 ************************************ 00:07:47.018 19:28:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.018 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.018 00:07:47.018 real 1m5.028s 00:07:47.018 user 1m8.849s 00:07:47.018 sys 0m7.178s 00:07:47.018 19:28:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.018 ************************************ 00:07:47.018 END TEST accel 00:07:47.018 ************************************ 00:07:47.018 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 19:28:33 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:47.277 19:28:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.277 19:28:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.277 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 ************************************ 00:07:47.277 START TEST accel_rpc 00:07:47.277 ************************************ 00:07:47.277 19:28:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:47.277 * Looking for test storage... 00:07:47.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:47.277 19:28:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.277 19:28:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.277 19:28:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.277 19:28:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.277 19:28:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.277 19:28:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.277 19:28:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.277 19:28:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.277 19:28:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.277 19:28:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.277 19:28:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.277 19:28:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.277 19:28:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.277 19:28:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.277 19:28:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.277 19:28:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.277 19:28:34 -- scripts/common.sh@344 -- # : 1 00:07:47.277 19:28:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.277 19:28:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.277 19:28:34 -- scripts/common.sh@364 -- # decimal 1 00:07:47.277 19:28:34 -- scripts/common.sh@352 -- # local d=1 00:07:47.277 19:28:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.277 19:28:34 -- scripts/common.sh@354 -- # echo 1 00:07:47.277 19:28:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.277 19:28:34 -- scripts/common.sh@365 -- # decimal 2 00:07:47.277 19:28:34 -- scripts/common.sh@352 -- # local d=2 00:07:47.277 19:28:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.277 19:28:34 -- scripts/common.sh@354 -- # echo 2 00:07:47.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.277 19:28:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.277 19:28:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.277 19:28:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.277 19:28:34 -- scripts/common.sh@367 -- # return 0 00:07:47.277 19:28:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.277 19:28:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.277 --rc genhtml_branch_coverage=1 00:07:47.277 --rc genhtml_function_coverage=1 00:07:47.277 --rc genhtml_legend=1 00:07:47.277 --rc geninfo_all_blocks=1 00:07:47.277 --rc geninfo_unexecuted_blocks=1 00:07:47.277 00:07:47.277 ' 00:07:47.277 19:28:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.277 --rc genhtml_branch_coverage=1 00:07:47.277 --rc genhtml_function_coverage=1 00:07:47.277 --rc genhtml_legend=1 00:07:47.277 --rc geninfo_all_blocks=1 00:07:47.277 --rc geninfo_unexecuted_blocks=1 00:07:47.277 00:07:47.277 ' 00:07:47.277 19:28:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.277 --rc genhtml_branch_coverage=1 00:07:47.277 --rc genhtml_function_coverage=1 00:07:47.277 --rc genhtml_legend=1 00:07:47.277 --rc geninfo_all_blocks=1 00:07:47.277 --rc geninfo_unexecuted_blocks=1 00:07:47.277 00:07:47.277 ' 00:07:47.277 19:28:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.277 --rc genhtml_branch_coverage=1 00:07:47.277 --rc genhtml_function_coverage=1 00:07:47.277 --rc genhtml_legend=1 00:07:47.277 --rc geninfo_all_blocks=1 00:07:47.277 --rc geninfo_unexecuted_blocks=1 00:07:47.277 00:07:47.277 ' 00:07:47.277 19:28:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.277 19:28:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71371 00:07:47.277 19:28:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:47.277 19:28:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 71371 00:07:47.277 19:28:34 -- common/autotest_common.sh@829 -- # '[' -z 71371 ']' 00:07:47.277 19:28:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.277 19:28:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.277 19:28:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:47.277 19:28:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.277 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 [2024-12-15 19:28:34.167669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.277 [2024-12-15 19:28:34.167957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ] 00:07:47.536 [2024-12-15 19:28:34.298624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.536 [2024-12-15 19:28:34.357804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.536 [2024-12-15 19:28:34.358299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.536 19:28:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.536 19:28:34 -- common/autotest_common.sh@862 -- # return 0 00:07:47.536 19:28:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:47.536 19:28:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:47.536 19:28:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:47.536 19:28:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:47.536 19:28:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:47.536 19:28:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.536 19:28:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.536 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:47.795 ************************************ 00:07:47.795 START TEST accel_assign_opcode 00:07:47.795 ************************************ 00:07:47.795 19:28:34 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:47.795 19:28:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:47.795 19:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.795 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:47.795 [2024-12-15 19:28:34.450880] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:47.795 19:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.795 19:28:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:47.795 19:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.795 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:47.795 [2024-12-15 19:28:34.458850] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:47.795 19:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.795 19:28:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:47.795 19:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.795 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.054 19:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.054 19:28:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:48.054 19:28:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:48.054 19:28:34 -- accel/accel_rpc.sh@42 -- # grep software 00:07:48.054 19:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.054 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.054 19:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.054 software 00:07:48.054 
************************************ 00:07:48.054 END TEST accel_assign_opcode 00:07:48.054 ************************************ 00:07:48.054 00:07:48.054 real 0m0.348s 00:07:48.054 user 0m0.052s 00:07:48.054 sys 0m0.014s 00:07:48.054 19:28:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.054 19:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.054 19:28:34 -- accel/accel_rpc.sh@55 -- # killprocess 71371 00:07:48.054 19:28:34 -- common/autotest_common.sh@936 -- # '[' -z 71371 ']' 00:07:48.054 19:28:34 -- common/autotest_common.sh@940 -- # kill -0 71371 00:07:48.054 19:28:34 -- common/autotest_common.sh@941 -- # uname 00:07:48.054 19:28:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.054 19:28:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71371 00:07:48.054 killing process with pid 71371 00:07:48.054 19:28:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.054 19:28:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.054 19:28:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71371' 00:07:48.054 19:28:34 -- common/autotest_common.sh@955 -- # kill 71371 00:07:48.054 19:28:34 -- common/autotest_common.sh@960 -- # wait 71371 00:07:48.622 00:07:48.622 real 0m1.424s 00:07:48.622 user 0m1.285s 00:07:48.622 sys 0m0.481s 00:07:48.622 ************************************ 00:07:48.622 END TEST accel_rpc 00:07:48.622 ************************************ 00:07:48.622 19:28:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.622 19:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 19:28:35 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:48.622 19:28:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.622 19:28:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.622 19:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 ************************************ 00:07:48.622 START TEST app_cmdline 00:07:48.622 ************************************ 00:07:48.622 19:28:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:48.622 * Looking for test storage... 
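The accel_rpc suite above boots spdk_tgt with --wait-for-rpc, pins the copy opcode to a module before framework initialization, then verifies the assignment. A hedged recreation of that sequence using scripts/rpc.py in place of the suite's rpc_cmd wrapper (paths assume the repo root, as in this workspace):
  ./build/bin/spdk_tgt --wait-for-rpc &
  # assign the copy opcode to the software accel module, then finish startup
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  # verify: the copy opcode should now report the software module
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected output: software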
00:07:48.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:48.622 19:28:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:48.622 19:28:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:48.622 19:28:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:48.880 19:28:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:48.880 19:28:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:48.880 19:28:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:48.880 19:28:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:48.880 19:28:35 -- scripts/common.sh@335 -- # IFS=.-: 00:07:48.880 19:28:35 -- scripts/common.sh@335 -- # read -ra ver1 00:07:48.880 19:28:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.880 19:28:35 -- scripts/common.sh@336 -- # read -ra ver2 00:07:48.880 19:28:35 -- scripts/common.sh@337 -- # local 'op=<' 00:07:48.880 19:28:35 -- scripts/common.sh@339 -- # ver1_l=2 00:07:48.880 19:28:35 -- scripts/common.sh@340 -- # ver2_l=1 00:07:48.880 19:28:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:48.880 19:28:35 -- scripts/common.sh@343 -- # case "$op" in 00:07:48.880 19:28:35 -- scripts/common.sh@344 -- # : 1 00:07:48.880 19:28:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:48.880 19:28:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.880 19:28:35 -- scripts/common.sh@364 -- # decimal 1 00:07:48.880 19:28:35 -- scripts/common.sh@352 -- # local d=1 00:07:48.880 19:28:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.880 19:28:35 -- scripts/common.sh@354 -- # echo 1 00:07:48.880 19:28:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:48.880 19:28:35 -- scripts/common.sh@365 -- # decimal 2 00:07:48.880 19:28:35 -- scripts/common.sh@352 -- # local d=2 00:07:48.880 19:28:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.880 19:28:35 -- scripts/common.sh@354 -- # echo 2 00:07:48.880 19:28:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:48.880 19:28:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:48.880 19:28:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:48.880 19:28:35 -- scripts/common.sh@367 -- # return 0 00:07:48.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.880 19:28:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.880 19:28:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.880 --rc genhtml_branch_coverage=1 00:07:48.880 --rc genhtml_function_coverage=1 00:07:48.880 --rc genhtml_legend=1 00:07:48.880 --rc geninfo_all_blocks=1 00:07:48.880 --rc geninfo_unexecuted_blocks=1 00:07:48.880 00:07:48.880 ' 00:07:48.880 19:28:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.880 --rc genhtml_branch_coverage=1 00:07:48.880 --rc genhtml_function_coverage=1 00:07:48.880 --rc genhtml_legend=1 00:07:48.880 --rc geninfo_all_blocks=1 00:07:48.880 --rc geninfo_unexecuted_blocks=1 00:07:48.880 00:07:48.880 ' 00:07:48.880 19:28:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.880 --rc genhtml_branch_coverage=1 00:07:48.880 --rc genhtml_function_coverage=1 00:07:48.880 --rc genhtml_legend=1 00:07:48.880 --rc geninfo_all_blocks=1 00:07:48.880 --rc geninfo_unexecuted_blocks=1 00:07:48.880 00:07:48.880 ' 00:07:48.880 19:28:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.880 --rc genhtml_branch_coverage=1 00:07:48.880 --rc genhtml_function_coverage=1 00:07:48.880 --rc genhtml_legend=1 00:07:48.880 --rc geninfo_all_blocks=1 00:07:48.880 --rc geninfo_unexecuted_blocks=1 00:07:48.880 00:07:48.880 ' 00:07:48.880 19:28:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:48.880 19:28:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71470 00:07:48.880 19:28:35 -- app/cmdline.sh@18 -- # waitforlisten 71470 00:07:48.880 19:28:35 -- common/autotest_common.sh@829 -- # '[' -z 71470 ']' 00:07:48.880 19:28:35 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:48.880 19:28:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.881 19:28:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.881 19:28:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.881 19:28:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.881 19:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:48.881 [2024-12-15 19:28:35.667848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.881 [2024-12-15 19:28:35.668125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71470 ] 00:07:49.140 [2024-12-15 19:28:35.805934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.140 [2024-12-15 19:28:35.871840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.140 [2024-12-15 19:28:35.872312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.076 19:28:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.076 19:28:36 -- common/autotest_common.sh@862 -- # return 0 00:07:50.076 19:28:36 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:50.076 { 00:07:50.076 "fields": { 00:07:50.076 "commit": "c13c99a5e", 00:07:50.076 "major": 24, 00:07:50.076 "minor": 1, 00:07:50.076 "patch": 1, 00:07:50.076 "suffix": "-pre" 00:07:50.076 }, 00:07:50.076 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:50.076 } 00:07:50.076 19:28:36 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:50.076 19:28:36 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:50.076 19:28:36 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:50.076 19:28:36 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:50.076 19:28:36 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:50.076 19:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.076 19:28:36 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:50.076 19:28:36 -- common/autotest_common.sh@10 -- # set +x 00:07:50.076 19:28:36 -- app/cmdline.sh@26 -- # sort 00:07:50.076 19:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.334 19:28:36 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:50.335 19:28:36 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:50.335 19:28:36 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.335 19:28:36 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.335 19:28:36 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.335 19:28:36 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.335 19:28:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.335 19:28:36 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.335 19:28:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.335 19:28:36 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.335 19:28:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.335 19:28:36 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.335 19:28:37 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:50.335 19:28:37 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.593 2024/12/15 19:28:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:50.593 request: 00:07:50.593 { 00:07:50.593 "method": "env_dpdk_get_mem_stats", 00:07:50.593 "params": {} 00:07:50.593 } 00:07:50.593 Got JSON-RPC error response 00:07:50.593 GoRPCClient: error on JSON-RPC call 00:07:50.593 19:28:37 -- common/autotest_common.sh@653 -- # es=1 00:07:50.593 19:28:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.593 19:28:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.593 19:28:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.593 19:28:37 -- app/cmdline.sh@1 -- # killprocess 71470 00:07:50.593 19:28:37 -- common/autotest_common.sh@936 -- # '[' -z 71470 ']' 00:07:50.593 19:28:37 -- common/autotest_common.sh@940 -- # kill -0 71470 00:07:50.593 19:28:37 -- common/autotest_common.sh@941 -- # uname 00:07:50.593 19:28:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.593 19:28:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71470 00:07:50.593 19:28:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.593 19:28:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.593 19:28:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71470' 00:07:50.593 killing process with pid 71470 00:07:50.593 19:28:37 -- common/autotest_common.sh@955 -- # kill 71470 00:07:50.593 19:28:37 -- common/autotest_common.sh@960 -- # wait 71470 00:07:51.161 00:07:51.161 real 0m2.373s 00:07:51.161 user 0m2.863s 00:07:51.161 sys 0m0.574s 00:07:51.161 19:28:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.161 ************************************ 00:07:51.161 END TEST app_cmdline 00:07:51.161 ************************************ 00:07:51.161 19:28:37 -- common/autotest_common.sh@10 -- # set +x 00:07:51.161 19:28:37 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.161 19:28:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.161 19:28:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.161 19:28:37 -- common/autotest_common.sh@10 -- # set +x 00:07:51.161 ************************************ 00:07:51.161 START TEST version 00:07:51.161 ************************************ 00:07:51.161 19:28:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.161 * Looking for test storage... 
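The app_cmdline run that just finished exercises the --rpcs-allowed allowlist: spdk_tgt is started so that only spdk_get_version and rpc_get_methods are callable, so the env_dpdk_get_mem_stats call above fails with JSON-RPC -32601 (Method not found) by design. A sketch of the same check, again via scripts/rpc.py rather than the suite's rpc_cmd wrapper:
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version                         # allowed: returns the version object
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort     # allowed: lists exactly the two methods
  ./scripts/rpc.py env_dpdk_get_mem_stats                   # rejected: Code=-32601 Msg=Method not found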
00:07:51.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.161 19:28:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.161 19:28:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.161 19:28:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.161 19:28:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.161 19:28:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.161 19:28:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.161 19:28:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.161 19:28:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.161 19:28:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.161 19:28:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.161 19:28:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.161 19:28:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.161 19:28:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.161 19:28:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.161 19:28:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.161 19:28:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.161 19:28:37 -- scripts/common.sh@344 -- # : 1 00:07:51.161 19:28:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.161 19:28:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.161 19:28:37 -- scripts/common.sh@364 -- # decimal 1 00:07:51.161 19:28:37 -- scripts/common.sh@352 -- # local d=1 00:07:51.161 19:28:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.161 19:28:38 -- scripts/common.sh@354 -- # echo 1 00:07:51.161 19:28:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.161 19:28:38 -- scripts/common.sh@365 -- # decimal 2 00:07:51.161 19:28:38 -- scripts/common.sh@352 -- # local d=2 00:07:51.161 19:28:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.161 19:28:38 -- scripts/common.sh@354 -- # echo 2 00:07:51.161 19:28:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.161 19:28:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.161 19:28:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.161 19:28:38 -- scripts/common.sh@367 -- # return 0 00:07:51.161 19:28:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.161 19:28:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.161 --rc genhtml_branch_coverage=1 00:07:51.161 --rc genhtml_function_coverage=1 00:07:51.161 --rc genhtml_legend=1 00:07:51.161 --rc geninfo_all_blocks=1 00:07:51.161 --rc geninfo_unexecuted_blocks=1 00:07:51.161 00:07:51.161 ' 00:07:51.161 19:28:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.161 --rc genhtml_branch_coverage=1 00:07:51.161 --rc genhtml_function_coverage=1 00:07:51.161 --rc genhtml_legend=1 00:07:51.161 --rc geninfo_all_blocks=1 00:07:51.161 --rc geninfo_unexecuted_blocks=1 00:07:51.161 00:07:51.161 ' 00:07:51.161 19:28:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.161 --rc genhtml_branch_coverage=1 00:07:51.161 --rc genhtml_function_coverage=1 00:07:51.161 --rc genhtml_legend=1 00:07:51.161 --rc geninfo_all_blocks=1 00:07:51.161 --rc geninfo_unexecuted_blocks=1 00:07:51.161 00:07:51.161 ' 00:07:51.161 19:28:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.161 --rc genhtml_branch_coverage=1 00:07:51.161 --rc genhtml_function_coverage=1 00:07:51.161 --rc genhtml_legend=1 00:07:51.161 --rc geninfo_all_blocks=1 00:07:51.161 --rc geninfo_unexecuted_blocks=1 00:07:51.161 00:07:51.161 ' 00:07:51.161 19:28:38 -- app/version.sh@17 -- # get_header_version major 00:07:51.161 19:28:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.161 19:28:38 -- app/version.sh@14 -- # cut -f2 00:07:51.161 19:28:38 -- app/version.sh@14 -- # tr -d '"' 00:07:51.161 19:28:38 -- app/version.sh@17 -- # major=24 00:07:51.161 19:28:38 -- app/version.sh@18 -- # get_header_version minor 00:07:51.161 19:28:38 -- app/version.sh@14 -- # cut -f2 00:07:51.161 19:28:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.161 19:28:38 -- app/version.sh@14 -- # tr -d '"' 00:07:51.161 19:28:38 -- app/version.sh@18 -- # minor=1 00:07:51.161 19:28:38 -- app/version.sh@19 -- # get_header_version patch 00:07:51.161 19:28:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.161 19:28:38 -- app/version.sh@14 -- # cut -f2 00:07:51.161 19:28:38 -- app/version.sh@14 -- # tr -d '"' 00:07:51.161 19:28:38 -- app/version.sh@19 -- # patch=1 00:07:51.161 19:28:38 -- app/version.sh@20 -- # get_header_version suffix 00:07:51.161 19:28:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.161 19:28:38 -- app/version.sh@14 -- # cut -f2 00:07:51.161 19:28:38 -- app/version.sh@14 -- # tr -d '"' 00:07:51.161 19:28:38 -- app/version.sh@20 -- # suffix=-pre 00:07:51.161 19:28:38 -- app/version.sh@22 -- # version=24.1 00:07:51.161 19:28:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:51.161 19:28:38 -- app/version.sh@25 -- # version=24.1.1 00:07:51.161 19:28:38 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:51.161 19:28:38 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:51.161 19:28:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:51.420 19:28:38 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:51.420 19:28:38 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:51.420 00:07:51.420 real 0m0.229s 00:07:51.420 user 0m0.141s 00:07:51.420 sys 0m0.125s 00:07:51.420 19:28:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.420 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.420 ************************************ 00:07:51.420 END TEST version 00:07:51.420 ************************************ 00:07:51.420 19:28:38 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@191 -- # uname -s 00:07:51.420 19:28:38 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:51.420 19:28:38 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:51.420 19:28:38 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:51.420 19:28:38 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:51.420 19:28:38 
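The version suite above derives the package version straight from include/spdk/version.h and cross-checks it against the Python module. A condensed sketch of that extraction, reusing the same grep/cut/tr pipeline the log shows (header path relative to the repo root):
  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  echo "$version$suffix"                                   # 24.1.1-pre in this tree
  python3 -c 'import spdk; print(spdk.__version__)'        # 24.1.1rc0 here; version.sh treats -pre as rc0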
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:51.420 19:28:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.420 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.420 19:28:38 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:51.420 19:28:38 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:51.420 19:28:38 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:51.420 19:28:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.420 19:28:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.420 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.420 ************************************ 00:07:51.420 START TEST nvmf_tcp 00:07:51.420 ************************************ 00:07:51.420 19:28:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:51.420 * Looking for test storage... 00:07:51.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:51.420 19:28:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.420 19:28:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.420 19:28:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.680 19:28:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.680 19:28:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.680 19:28:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.680 19:28:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.680 19:28:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.680 19:28:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.680 19:28:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.680 19:28:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.680 19:28:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.680 19:28:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.680 19:28:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.680 19:28:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.680 19:28:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.680 19:28:38 -- scripts/common.sh@344 -- # : 1 00:07:51.680 19:28:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.680 19:28:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.680 19:28:38 -- scripts/common.sh@364 -- # decimal 1 00:07:51.680 19:28:38 -- scripts/common.sh@352 -- # local d=1 00:07:51.680 19:28:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.680 19:28:38 -- scripts/common.sh@354 -- # echo 1 00:07:51.680 19:28:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.680 19:28:38 -- scripts/common.sh@365 -- # decimal 2 00:07:51.680 19:28:38 -- scripts/common.sh@352 -- # local d=2 00:07:51.680 19:28:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.680 19:28:38 -- scripts/common.sh@354 -- # echo 2 00:07:51.680 19:28:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.680 19:28:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.680 19:28:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.680 19:28:38 -- scripts/common.sh@367 -- # return 0 00:07:51.680 19:28:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.680 19:28:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.680 --rc genhtml_branch_coverage=1 00:07:51.680 --rc genhtml_function_coverage=1 00:07:51.680 --rc genhtml_legend=1 00:07:51.680 --rc geninfo_all_blocks=1 00:07:51.680 --rc geninfo_unexecuted_blocks=1 00:07:51.680 00:07:51.680 ' 00:07:51.680 19:28:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.680 --rc genhtml_branch_coverage=1 00:07:51.680 --rc genhtml_function_coverage=1 00:07:51.680 --rc genhtml_legend=1 00:07:51.680 --rc geninfo_all_blocks=1 00:07:51.680 --rc geninfo_unexecuted_blocks=1 00:07:51.680 00:07:51.680 ' 00:07:51.680 19:28:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.680 --rc genhtml_branch_coverage=1 00:07:51.680 --rc genhtml_function_coverage=1 00:07:51.680 --rc genhtml_legend=1 00:07:51.680 --rc geninfo_all_blocks=1 00:07:51.680 --rc geninfo_unexecuted_blocks=1 00:07:51.680 00:07:51.680 ' 00:07:51.680 19:28:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.680 --rc genhtml_branch_coverage=1 00:07:51.680 --rc genhtml_function_coverage=1 00:07:51.680 --rc genhtml_legend=1 00:07:51.680 --rc geninfo_all_blocks=1 00:07:51.680 --rc geninfo_unexecuted_blocks=1 00:07:51.680 00:07:51.680 ' 00:07:51.680 19:28:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:51.680 19:28:38 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:51.680 19:28:38 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.680 19:28:38 -- nvmf/common.sh@7 -- # uname -s 00:07:51.680 19:28:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.680 19:28:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.680 19:28:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.680 19:28:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.680 19:28:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.680 19:28:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.680 19:28:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.680 19:28:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.680 19:28:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.680 19:28:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.680 19:28:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:07:51.680 19:28:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:07:51.680 19:28:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.680 19:28:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.680 19:28:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.680 19:28:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.680 19:28:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.680 19:28:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.680 19:28:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.680 19:28:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.680 19:28:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.681 19:28:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.681 19:28:38 -- paths/export.sh@5 -- # export PATH 00:07:51.681 19:28:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.681 19:28:38 -- nvmf/common.sh@46 -- # : 0 00:07:51.681 19:28:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.681 19:28:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.681 19:28:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.681 19:28:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.681 19:28:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.681 19:28:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:51.681 19:28:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.681 19:28:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.681 19:28:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:51.681 19:28:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:51.681 19:28:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:51.681 19:28:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.681 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.681 19:28:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:51.681 19:28:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:51.681 19:28:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.681 19:28:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.681 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.681 ************************************ 00:07:51.681 START TEST nvmf_example 00:07:51.681 ************************************ 00:07:51.681 19:28:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:51.681 * Looking for test storage... 00:07:51.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.681 19:28:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.681 19:28:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.681 19:28:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.681 19:28:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.681 19:28:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.681 19:28:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.681 19:28:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.681 19:28:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.681 19:28:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.681 19:28:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.681 19:28:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.681 19:28:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.681 19:28:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.681 19:28:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.681 19:28:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.681 19:28:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.681 19:28:38 -- scripts/common.sh@344 -- # : 1 00:07:51.681 19:28:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.681 19:28:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.681 19:28:38 -- scripts/common.sh@364 -- # decimal 1 00:07:51.681 19:28:38 -- scripts/common.sh@352 -- # local d=1 00:07:51.681 19:28:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.681 19:28:38 -- scripts/common.sh@354 -- # echo 1 00:07:51.681 19:28:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.681 19:28:38 -- scripts/common.sh@365 -- # decimal 2 00:07:51.941 19:28:38 -- scripts/common.sh@352 -- # local d=2 00:07:51.941 19:28:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.941 19:28:38 -- scripts/common.sh@354 -- # echo 2 00:07:51.941 19:28:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.941 19:28:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.941 19:28:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.941 19:28:38 -- scripts/common.sh@367 -- # return 0 00:07:51.941 19:28:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.941 19:28:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.941 --rc genhtml_branch_coverage=1 00:07:51.941 --rc genhtml_function_coverage=1 00:07:51.941 --rc genhtml_legend=1 00:07:51.941 --rc geninfo_all_blocks=1 00:07:51.941 --rc geninfo_unexecuted_blocks=1 00:07:51.941 00:07:51.941 ' 00:07:51.941 19:28:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.941 --rc genhtml_branch_coverage=1 00:07:51.941 --rc genhtml_function_coverage=1 00:07:51.941 --rc genhtml_legend=1 00:07:51.941 --rc geninfo_all_blocks=1 00:07:51.941 --rc geninfo_unexecuted_blocks=1 00:07:51.941 00:07:51.941 ' 00:07:51.941 19:28:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.941 --rc genhtml_branch_coverage=1 00:07:51.941 --rc genhtml_function_coverage=1 00:07:51.941 --rc genhtml_legend=1 00:07:51.941 --rc geninfo_all_blocks=1 00:07:51.941 --rc geninfo_unexecuted_blocks=1 00:07:51.941 00:07:51.941 ' 00:07:51.941 19:28:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.941 --rc genhtml_branch_coverage=1 00:07:51.941 --rc genhtml_function_coverage=1 00:07:51.941 --rc genhtml_legend=1 00:07:51.941 --rc geninfo_all_blocks=1 00:07:51.941 --rc geninfo_unexecuted_blocks=1 00:07:51.941 00:07:51.941 ' 00:07:51.941 19:28:38 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.941 19:28:38 -- nvmf/common.sh@7 -- # uname -s 00:07:51.941 19:28:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.941 19:28:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.941 19:28:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.941 19:28:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.941 19:28:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.941 19:28:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.941 19:28:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.941 19:28:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.941 19:28:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.941 19:28:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.941 19:28:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:07:51.941 19:28:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:07:51.941 19:28:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.941 19:28:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.941 19:28:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.941 19:28:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.941 19:28:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.941 19:28:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.941 19:28:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.941 19:28:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.941 19:28:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.941 19:28:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.941 19:28:38 -- paths/export.sh@5 -- # export PATH 00:07:51.941 19:28:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.941 19:28:38 -- nvmf/common.sh@46 -- # : 0 00:07:51.941 19:28:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.941 19:28:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.941 19:28:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.941 19:28:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.941 19:28:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.941 19:28:38 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:51.941 19:28:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.941 19:28:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.941 19:28:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:51.941 19:28:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:51.941 19:28:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:51.941 19:28:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:51.941 19:28:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:51.942 19:28:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:51.942 19:28:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:51.942 19:28:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:51.942 19:28:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.942 19:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.942 19:28:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:51.942 19:28:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:51.942 19:28:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.942 19:28:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:51.942 19:28:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:51.942 19:28:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:51.942 19:28:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.942 19:28:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.942 19:28:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.942 19:28:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:51.942 19:28:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:51.942 19:28:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:51.942 19:28:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:51.942 19:28:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:51.942 19:28:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:51.942 19:28:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.942 19:28:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.942 19:28:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.942 19:28:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:51.942 19:28:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.942 19:28:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.942 19:28:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.942 19:28:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.942 19:28:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.942 19:28:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.942 19:28:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.942 19:28:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.942 19:28:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:51.942 Cannot find device "nvmf_init_br" 00:07:51.942 19:28:38 -- nvmf/common.sh@153 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:51.942 Cannot find device "nvmf_tgt_br" 00:07:51.942 19:28:38 -- nvmf/common.sh@154 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.942 Cannot find device "nvmf_tgt_br2" 
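nvmftestinit above takes the virtual-ethernet (NET_TYPE=virt) path: an initiator interface on the host at 10.0.0.1, two target interfaces inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined through the nvmf_br bridge. The ip commands that realize it are logged right after this point; condensed, they amount to roughly:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # plus: bring every link up, open tcp/4420 on the initiator side, and ping 10.0.0.2/10.0.0.3 to verify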
00:07:51.942 19:28:38 -- nvmf/common.sh@155 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:51.942 Cannot find device "nvmf_init_br" 00:07:51.942 19:28:38 -- nvmf/common.sh@156 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:51.942 Cannot find device "nvmf_tgt_br" 00:07:51.942 19:28:38 -- nvmf/common.sh@157 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:51.942 Cannot find device "nvmf_tgt_br2" 00:07:51.942 19:28:38 -- nvmf/common.sh@158 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:51.942 Cannot find device "nvmf_br" 00:07:51.942 19:28:38 -- nvmf/common.sh@159 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:51.942 Cannot find device "nvmf_init_if" 00:07:51.942 19:28:38 -- nvmf/common.sh@160 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.942 19:28:38 -- nvmf/common.sh@161 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.942 19:28:38 -- nvmf/common.sh@162 -- # true 00:07:51.942 19:28:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.942 19:28:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.942 19:28:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.942 19:28:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.942 19:28:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.942 19:28:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.942 19:28:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.942 19:28:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.942 19:28:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.942 19:28:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:51.942 19:28:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:51.942 19:28:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:51.942 19:28:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:51.942 19:28:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:52.201 19:28:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:52.201 19:28:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:52.201 19:28:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:52.201 19:28:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:52.201 19:28:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.201 19:28:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.201 19:28:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:52.201 19:28:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.201 19:28:38 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.201 19:28:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:52.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:07:52.201 00:07:52.201 --- 10.0.0.2 ping statistics --- 00:07:52.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.201 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:52.201 19:28:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:52.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:52.201 00:07:52.201 --- 10.0.0.3 ping statistics --- 00:07:52.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.201 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:52.201 19:28:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:52.201 00:07:52.201 --- 10.0.0.1 ping statistics --- 00:07:52.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.201 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:52.201 19:28:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.201 19:28:38 -- nvmf/common.sh@421 -- # return 0 00:07:52.201 19:28:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:52.201 19:28:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.201 19:28:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:52.201 19:28:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:52.201 19:28:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.201 19:28:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:52.201 19:28:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:52.201 19:28:39 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:52.201 19:28:39 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:52.201 19:28:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.201 19:28:39 -- common/autotest_common.sh@10 -- # set +x 00:07:52.201 19:28:39 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:52.201 19:28:39 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:52.201 19:28:39 -- target/nvmf_example.sh@34 -- # nvmfpid=71858 00:07:52.201 19:28:39 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:52.201 19:28:39 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.201 19:28:39 -- target/nvmf_example.sh@36 -- # waitforlisten 71858 00:07:52.201 19:28:39 -- common/autotest_common.sh@829 -- # '[' -z 71858 ']' 00:07:52.201 19:28:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.201 19:28:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.201 19:28:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
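With the fabric reachable (the three pings above), nvmf_example.sh launches the example nvmf target inside the namespace and drives it with spdk_nvme_perf; the RPC bring-up and the run itself are logged immediately below. Condensed under the same addresses, NQN, and flags used here (scripts/rpc.py shown in place of the suite's rpc_cmd wrapper):
  ip netns exec nvmf_tgt_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 4 KiB random I/O, 30% reads / 70% writes, queue depth 64, 10 seconds, as in the run below
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'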
00:07:52.201 19:28:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.201 19:28:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.578 19:28:40 -- common/autotest_common.sh@862 -- # return 0 00:07:53.578 19:28:40 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:53.578 19:28:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.578 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.578 19:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.578 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.578 19:28:40 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:53.578 19:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.578 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.578 19:28:40 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:53.578 19:28:40 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.578 19:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.578 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.578 19:28:40 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:53.578 19:28:40 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.578 19:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.578 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.578 19:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.578 19:28:40 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.578 19:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.579 19:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.579 19:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.579 19:28:40 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:53.579 19:28:40 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:05.786 Initializing NVMe Controllers 00:08:05.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:05.786 Initialization complete. Launching workers. 
00:08:05.786 ======================================================== 00:08:05.786 Latency(us) 00:08:05.786 Device Information : IOPS MiB/s Average min max 00:08:05.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16026.20 62.60 3994.62 604.62 20213.85 00:08:05.786 ======================================================== 00:08:05.786 Total : 16026.20 62.60 3994.62 604.62 20213.85 00:08:05.786 00:08:05.786 19:28:50 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:05.786 19:28:50 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:05.786 19:28:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:05.786 19:28:50 -- nvmf/common.sh@116 -- # sync 00:08:05.786 19:28:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:05.786 19:28:50 -- nvmf/common.sh@119 -- # set +e 00:08:05.786 19:28:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:05.786 19:28:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:05.786 rmmod nvme_tcp 00:08:05.786 rmmod nvme_fabrics 00:08:05.786 rmmod nvme_keyring 00:08:05.786 19:28:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:05.786 19:28:50 -- nvmf/common.sh@123 -- # set -e 00:08:05.786 19:28:50 -- nvmf/common.sh@124 -- # return 0 00:08:05.786 19:28:50 -- nvmf/common.sh@477 -- # '[' -n 71858 ']' 00:08:05.786 19:28:50 -- nvmf/common.sh@478 -- # killprocess 71858 00:08:05.786 19:28:50 -- common/autotest_common.sh@936 -- # '[' -z 71858 ']' 00:08:05.786 19:28:50 -- common/autotest_common.sh@940 -- # kill -0 71858 00:08:05.786 19:28:50 -- common/autotest_common.sh@941 -- # uname 00:08:05.786 19:28:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:05.786 19:28:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71858 00:08:05.786 19:28:50 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:05.786 19:28:50 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:05.786 killing process with pid 71858 00:08:05.786 19:28:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71858' 00:08:05.786 19:28:50 -- common/autotest_common.sh@955 -- # kill 71858 00:08:05.786 19:28:50 -- common/autotest_common.sh@960 -- # wait 71858 00:08:05.786 nvmf threads initialize successfully 00:08:05.786 bdev subsystem init successfully 00:08:05.786 created a nvmf target service 00:08:05.786 create targets's poll groups done 00:08:05.786 all subsystems of target started 00:08:05.786 nvmf target is running 00:08:05.786 all subsystems of target stopped 00:08:05.786 destroy targets's poll groups done 00:08:05.786 destroyed the nvmf target service 00:08:05.786 bdev subsystem finish successfully 00:08:05.786 nvmf threads destroy successfully 00:08:05.786 19:28:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:05.786 19:28:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:05.786 19:28:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:05.786 19:28:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.786 19:28:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:05.786 19:28:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.786 19:28:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.786 19:28:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.786 19:28:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:05.786 19:28:50 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:05.786 19:28:50 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:05.786 19:28:50 -- common/autotest_common.sh@10 -- # set +x 00:08:05.786 00:08:05.786 real 0m12.570s 00:08:05.786 user 0m45.036s 00:08:05.786 sys 0m2.016s 00:08:05.786 19:28:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.786 ************************************ 00:08:05.786 END TEST nvmf_example 00:08:05.786 ************************************ 00:08:05.786 19:28:50 -- common/autotest_common.sh@10 -- # set +x 00:08:05.786 19:28:51 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:05.786 19:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:05.786 19:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.786 19:28:51 -- common/autotest_common.sh@10 -- # set +x 00:08:05.786 ************************************ 00:08:05.786 START TEST nvmf_filesystem 00:08:05.786 ************************************ 00:08:05.786 19:28:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:05.786 * Looking for test storage... 00:08:05.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.786 19:28:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:05.786 19:28:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:05.786 19:28:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:05.786 19:28:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:05.786 19:28:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:05.786 19:28:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:05.786 19:28:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:05.786 19:28:51 -- scripts/common.sh@335 -- # IFS=.-: 00:08:05.786 19:28:51 -- scripts/common.sh@335 -- # read -ra ver1 00:08:05.786 19:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.787 19:28:51 -- scripts/common.sh@336 -- # read -ra ver2 00:08:05.787 19:28:51 -- scripts/common.sh@337 -- # local 'op=<' 00:08:05.787 19:28:51 -- scripts/common.sh@339 -- # ver1_l=2 00:08:05.787 19:28:51 -- scripts/common.sh@340 -- # ver2_l=1 00:08:05.787 19:28:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:05.787 19:28:51 -- scripts/common.sh@343 -- # case "$op" in 00:08:05.787 19:28:51 -- scripts/common.sh@344 -- # : 1 00:08:05.787 19:28:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:05.787 19:28:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.787 19:28:51 -- scripts/common.sh@364 -- # decimal 1 00:08:05.787 19:28:51 -- scripts/common.sh@352 -- # local d=1 00:08:05.787 19:28:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.787 19:28:51 -- scripts/common.sh@354 -- # echo 1 00:08:05.787 19:28:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:05.787 19:28:51 -- scripts/common.sh@365 -- # decimal 2 00:08:05.787 19:28:51 -- scripts/common.sh@352 -- # local d=2 00:08:05.787 19:28:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.787 19:28:51 -- scripts/common.sh@354 -- # echo 2 00:08:05.787 19:28:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:05.787 19:28:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:05.787 19:28:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:05.787 19:28:51 -- scripts/common.sh@367 -- # return 0 00:08:05.787 19:28:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.787 19:28:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.787 --rc genhtml_branch_coverage=1 00:08:05.787 --rc genhtml_function_coverage=1 00:08:05.787 --rc genhtml_legend=1 00:08:05.787 --rc geninfo_all_blocks=1 00:08:05.787 --rc geninfo_unexecuted_blocks=1 00:08:05.787 00:08:05.787 ' 00:08:05.787 19:28:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.787 --rc genhtml_branch_coverage=1 00:08:05.787 --rc genhtml_function_coverage=1 00:08:05.787 --rc genhtml_legend=1 00:08:05.787 --rc geninfo_all_blocks=1 00:08:05.787 --rc geninfo_unexecuted_blocks=1 00:08:05.787 00:08:05.787 ' 00:08:05.787 19:28:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.787 --rc genhtml_branch_coverage=1 00:08:05.787 --rc genhtml_function_coverage=1 00:08:05.787 --rc genhtml_legend=1 00:08:05.787 --rc geninfo_all_blocks=1 00:08:05.787 --rc geninfo_unexecuted_blocks=1 00:08:05.787 00:08:05.787 ' 00:08:05.787 19:28:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.787 --rc genhtml_branch_coverage=1 00:08:05.787 --rc genhtml_function_coverage=1 00:08:05.787 --rc genhtml_legend=1 00:08:05.787 --rc geninfo_all_blocks=1 00:08:05.787 --rc geninfo_unexecuted_blocks=1 00:08:05.787 00:08:05.787 ' 00:08:05.787 19:28:51 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:05.787 19:28:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:05.787 19:28:51 -- common/autotest_common.sh@34 -- # set -e 00:08:05.787 19:28:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:05.787 19:28:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:05.787 19:28:51 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:05.787 19:28:51 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:05.787 19:28:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:05.787 19:28:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:05.787 19:28:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:05.787 19:28:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:05.787 19:28:51 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:05.787 19:28:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:05.787 19:28:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:05.787 19:28:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:05.787 19:28:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:05.787 19:28:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:05.787 19:28:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:05.787 19:28:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:05.787 19:28:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:05.787 19:28:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:05.787 19:28:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:05.787 19:28:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:05.787 19:28:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:05.787 19:28:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:05.787 19:28:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:05.787 19:28:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:05.787 19:28:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:05.787 19:28:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:05.787 19:28:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:05.787 19:28:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:05.787 19:28:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:05.787 19:28:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:05.787 19:28:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:05.787 19:28:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:05.787 19:28:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:05.787 19:28:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:05.787 19:28:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:05.787 19:28:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:05.787 19:28:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:05.787 19:28:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:05.787 19:28:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:05.787 19:28:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:05.787 19:28:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:05.787 19:28:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:05.787 19:28:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:05.787 19:28:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:05.787 19:28:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:05.787 19:28:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:05.787 19:28:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:05.787 19:28:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:05.787 19:28:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:05.787 19:28:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:05.787 19:28:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:05.787 19:28:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:05.787 19:28:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:05.787 19:28:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:05.787 19:28:51 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:05.787 19:28:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:05.787 19:28:51 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:05.787 19:28:51 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:05.787 19:28:51 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:05.787 19:28:51 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:05.787 19:28:51 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:05.787 19:28:51 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:05.787 19:28:51 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:05.787 19:28:51 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:05.787 19:28:51 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:05.787 19:28:51 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:05.787 19:28:51 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:05.787 19:28:51 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:05.787 19:28:51 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:05.787 19:28:51 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:05.787 19:28:51 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:05.787 19:28:51 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:05.787 19:28:51 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:05.787 19:28:51 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:05.787 19:28:51 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:05.787 19:28:51 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:05.787 19:28:51 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:05.787 19:28:51 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:05.787 19:28:51 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:05.787 19:28:51 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:05.787 19:28:51 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:05.787 19:28:51 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:05.787 19:28:51 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:05.787 19:28:51 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:05.787 19:28:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:05.787 19:28:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:05.787 19:28:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:05.787 19:28:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:05.787 19:28:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:05.787 19:28:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:05.787 19:28:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:05.787 19:28:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:05.787 19:28:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:05.787 19:28:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:05.787 19:28:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:05.787 19:28:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:05.787 19:28:51 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:05.787 19:28:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:05.787 19:28:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:05.787 #define SPDK_CONFIG_H 00:08:05.787 #define SPDK_CONFIG_APPS 1 00:08:05.787 #define SPDK_CONFIG_ARCH native 00:08:05.787 #undef SPDK_CONFIG_ASAN 00:08:05.787 #define SPDK_CONFIG_AVAHI 1 00:08:05.787 #undef SPDK_CONFIG_CET 00:08:05.787 #define SPDK_CONFIG_COVERAGE 1 00:08:05.787 #define SPDK_CONFIG_CROSS_PREFIX 00:08:05.787 #undef SPDK_CONFIG_CRYPTO 00:08:05.787 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:05.788 #undef SPDK_CONFIG_CUSTOMOCF 00:08:05.788 #undef SPDK_CONFIG_DAOS 00:08:05.788 #define SPDK_CONFIG_DAOS_DIR 00:08:05.788 #define SPDK_CONFIG_DEBUG 1 00:08:05.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:05.788 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:05.788 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:05.788 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:05.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:05.788 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:05.788 #define SPDK_CONFIG_EXAMPLES 1 00:08:05.788 #undef SPDK_CONFIG_FC 00:08:05.788 #define SPDK_CONFIG_FC_PATH 00:08:05.788 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:05.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:05.788 #undef SPDK_CONFIG_FUSE 00:08:05.788 #undef SPDK_CONFIG_FUZZER 00:08:05.788 #define SPDK_CONFIG_FUZZER_LIB 00:08:05.788 #define SPDK_CONFIG_GOLANG 1 00:08:05.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:05.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:05.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:05.788 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:05.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:05.788 #define SPDK_CONFIG_IDXD 1 00:08:05.788 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:05.788 #undef SPDK_CONFIG_IPSEC_MB 00:08:05.788 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:05.788 #define SPDK_CONFIG_ISAL 1 00:08:05.788 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:05.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:05.788 #define SPDK_CONFIG_LIBDIR 00:08:05.788 #undef SPDK_CONFIG_LTO 00:08:05.788 #define SPDK_CONFIG_MAX_LCORES 00:08:05.788 #define SPDK_CONFIG_NVME_CUSE 1 00:08:05.788 #undef SPDK_CONFIG_OCF 00:08:05.788 #define SPDK_CONFIG_OCF_PATH 00:08:05.788 #define SPDK_CONFIG_OPENSSL_PATH 00:08:05.788 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:05.788 #undef SPDK_CONFIG_PGO_USE 00:08:05.788 #define SPDK_CONFIG_PREFIX /usr/local 00:08:05.788 #undef SPDK_CONFIG_RAID5F 00:08:05.788 #undef SPDK_CONFIG_RBD 00:08:05.788 #define SPDK_CONFIG_RDMA 1 00:08:05.788 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:05.788 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:05.788 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:05.788 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:05.788 #define SPDK_CONFIG_SHARED 1 00:08:05.788 #undef SPDK_CONFIG_SMA 00:08:05.788 #define SPDK_CONFIG_TESTS 1 00:08:05.788 #undef SPDK_CONFIG_TSAN 00:08:05.788 #define SPDK_CONFIG_UBLK 1 00:08:05.788 #define SPDK_CONFIG_UBSAN 1 00:08:05.788 #undef SPDK_CONFIG_UNIT_TESTS 00:08:05.788 #undef SPDK_CONFIG_URING 00:08:05.788 #define SPDK_CONFIG_URING_PATH 00:08:05.788 #undef SPDK_CONFIG_URING_ZNS 00:08:05.788 #define SPDK_CONFIG_USDT 1 00:08:05.788 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:05.788 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:05.788 #undef SPDK_CONFIG_VFIO_USER 00:08:05.788 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:05.788 #define SPDK_CONFIG_VHOST 1 00:08:05.788 #define SPDK_CONFIG_VIRTIO 1 00:08:05.788 #undef SPDK_CONFIG_VTUNE 00:08:05.788 #define SPDK_CONFIG_VTUNE_DIR 00:08:05.788 #define SPDK_CONFIG_WERROR 1 00:08:05.788 #define SPDK_CONFIG_WPDK_DIR 00:08:05.788 #undef SPDK_CONFIG_XNVME 00:08:05.788 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:05.788 19:28:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:05.788 19:28:51 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.788 19:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.788 19:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.788 19:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.788 19:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.788 19:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.788 19:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.788 19:28:51 -- paths/export.sh@5 -- # export PATH 00:08:05.788 19:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.788 19:28:51 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:05.788 19:28:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:05.788 19:28:51 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:05.788 19:28:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:05.788 19:28:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:05.788 19:28:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:05.788 19:28:51 -- pm/common@16 -- # TEST_TAG=N/A 00:08:05.788 19:28:51 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:05.788 19:28:51 -- common/autotest_common.sh@52 -- # : 1 00:08:05.788 19:28:51 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:05.788 19:28:51 -- common/autotest_common.sh@56 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:05.788 19:28:51 -- common/autotest_common.sh@58 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:05.788 19:28:51 -- common/autotest_common.sh@60 -- # : 1 00:08:05.788 19:28:51 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:05.788 19:28:51 -- common/autotest_common.sh@62 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:05.788 19:28:51 -- common/autotest_common.sh@64 -- # : 00:08:05.788 19:28:51 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:05.788 19:28:51 -- common/autotest_common.sh@66 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:05.788 19:28:51 -- common/autotest_common.sh@68 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:05.788 19:28:51 -- common/autotest_common.sh@70 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:05.788 19:28:51 -- common/autotest_common.sh@72 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:05.788 19:28:51 -- common/autotest_common.sh@74 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:05.788 19:28:51 -- common/autotest_common.sh@76 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:05.788 19:28:51 -- common/autotest_common.sh@78 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:05.788 19:28:51 -- common/autotest_common.sh@80 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:05.788 19:28:51 -- common/autotest_common.sh@82 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:05.788 19:28:51 -- common/autotest_common.sh@84 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:05.788 19:28:51 -- common/autotest_common.sh@86 -- # : 1 00:08:05.788 19:28:51 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:05.788 19:28:51 -- common/autotest_common.sh@88 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:05.788 19:28:51 -- common/autotest_common.sh@90 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:05.788 19:28:51 -- common/autotest_common.sh@92 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:05.788 19:28:51 -- common/autotest_common.sh@94 -- # : 0 00:08:05.788 19:28:51 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:05.788 19:28:51 -- common/autotest_common.sh@96 -- # : tcp 00:08:05.788 19:28:51 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:05.788 19:28:51 -- common/autotest_common.sh@98 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:05.788 19:28:51 -- common/autotest_common.sh@100 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:05.788 19:28:51 -- common/autotest_common.sh@102 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:05.788 19:28:51 -- common/autotest_common.sh@104 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:05.788 19:28:51 -- common/autotest_common.sh@106 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:05.788 19:28:51 -- common/autotest_common.sh@108 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:05.788 19:28:51 -- common/autotest_common.sh@110 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:05.788 19:28:51 -- common/autotest_common.sh@112 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:05.788 19:28:51 -- common/autotest_common.sh@114 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:05.788 19:28:51 -- common/autotest_common.sh@116 -- # : 1 00:08:05.788 19:28:51 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:05.788 19:28:51 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:05.788 19:28:51 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:05.788 19:28:51 -- common/autotest_common.sh@120 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:05.788 19:28:51 -- common/autotest_common.sh@122 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:05.788 19:28:51 -- common/autotest_common.sh@124 -- # : 0 00:08:05.788 19:28:51 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:05.789 19:28:51 -- common/autotest_common.sh@126 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:05.789 19:28:51 -- common/autotest_common.sh@128 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:05.789 19:28:51 -- common/autotest_common.sh@130 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:05.789 19:28:51 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:05.789 19:28:51 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:05.789 19:28:51 -- common/autotest_common.sh@134 -- # : true 00:08:05.789 19:28:51 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:05.789 19:28:51 -- common/autotest_common.sh@136 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:05.789 19:28:51 -- common/autotest_common.sh@138 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:05.789 19:28:51 -- common/autotest_common.sh@140 -- # : 1 00:08:05.789 19:28:51 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:05.789 19:28:51 -- 
common/autotest_common.sh@142 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:05.789 19:28:51 -- common/autotest_common.sh@144 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:05.789 19:28:51 -- common/autotest_common.sh@146 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:05.789 19:28:51 -- common/autotest_common.sh@148 -- # : 00:08:05.789 19:28:51 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:05.789 19:28:51 -- common/autotest_common.sh@150 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:05.789 19:28:51 -- common/autotest_common.sh@152 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:05.789 19:28:51 -- common/autotest_common.sh@154 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:05.789 19:28:51 -- common/autotest_common.sh@156 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:05.789 19:28:51 -- common/autotest_common.sh@158 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:05.789 19:28:51 -- common/autotest_common.sh@160 -- # : 0 00:08:05.789 19:28:51 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:05.789 19:28:51 -- common/autotest_common.sh@163 -- # : 00:08:05.789 19:28:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:05.789 19:28:51 -- common/autotest_common.sh@165 -- # : 1 00:08:05.789 19:28:51 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:05.789 19:28:51 -- common/autotest_common.sh@167 -- # : 1 00:08:05.789 19:28:51 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:05.789 19:28:51 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:05.789 19:28:51 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:05.789 19:28:51 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:05.789 19:28:51 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:05.789 19:28:51 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:05.789 19:28:51 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:05.789 19:28:51 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:05.789 19:28:51 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:05.789 19:28:51 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:05.789 19:28:51 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:05.789 19:28:51 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:05.789 19:28:51 -- common/autotest_common.sh@196 -- # cat 00:08:05.789 19:28:51 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:05.789 19:28:51 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:05.789 19:28:51 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:05.789 19:28:51 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:05.789 19:28:51 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:05.789 19:28:51 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:05.789 19:28:51 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:05.789 19:28:51 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:05.789 19:28:51 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:05.789 19:28:51 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:05.789 19:28:51 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:05.789 19:28:51 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:05.789 19:28:51 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:05.789 19:28:51 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:05.789 19:28:51 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:05.789 19:28:51 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:05.789 19:28:51 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:05.789 19:28:51 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:05.789 19:28:51 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:05.789 19:28:51 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:05.789 19:28:51 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:05.789 19:28:51 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:05.789 19:28:51 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:05.789 19:28:51 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:05.789 19:28:51 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:05.789 19:28:51 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:05.789 19:28:51 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:05.789 19:28:51 -- common/autotest_common.sh@259 -- # valgrind= 00:08:05.789 19:28:51 -- common/autotest_common.sh@265 -- # uname -s 00:08:05.789 19:28:51 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:05.789 19:28:51 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:05.789 19:28:51 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:05.789 19:28:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:05.789 19:28:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:05.789 19:28:51 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:05.789 19:28:51 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:05.790 19:28:51 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:05.790 19:28:51 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:05.790 19:28:51 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:05.790 19:28:51 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:05.790 19:28:51 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:05.790 19:28:51 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:05.790 19:28:51 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:05.790 19:28:51 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:05.790 19:28:51 -- common/autotest_common.sh@319 -- # [[ 
-z 72098 ]] 00:08:05.790 19:28:51 -- common/autotest_common.sh@319 -- # kill -0 72098 00:08:05.790 19:28:51 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:05.790 19:28:51 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:05.790 19:28:51 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:05.790 19:28:51 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:05.790 19:28:51 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:05.790 19:28:51 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:05.790 19:28:51 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:05.790 19:28:51 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:05.790 19:28:51 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.W5cBtu 00:08:05.790 19:28:51 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:05.790 19:28:51 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:05.790 19:28:51 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:05.790 19:28:51 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.W5cBtu/tests/target /tmp/spdk.W5cBtu 00:08:05.790 19:28:51 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@328 -- # df -T 00:08:05.790 19:28:51 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431644160 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150529024 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:05.790 19:28:51 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431644160 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150529024 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:05.790 19:28:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=97245495296 00:08:05.790 19:28:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:05.790 19:28:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=2457284608 00:08:05.790 19:28:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:05.790 19:28:51 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:05.790 * Looking for test storage... 00:08:05.791 19:28:51 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:05.791 19:28:51 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:05.791 19:28:51 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.791 19:28:51 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:05.791 19:28:51 -- common/autotest_common.sh@373 -- # mount=/home 00:08:05.791 19:28:51 -- common/autotest_common.sh@375 -- # target_space=13431644160 00:08:05.791 19:28:51 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:05.791 19:28:51 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:05.791 19:28:51 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.791 19:28:51 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.791 19:28:51 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.791 19:28:51 -- common/autotest_common.sh@390 -- # return 0 00:08:05.791 19:28:51 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:05.791 19:28:51 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:05.791 19:28:51 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:05.791 19:28:51 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:05.791 19:28:51 -- common/autotest_common.sh@1682 -- # true 00:08:05.791 19:28:51 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:05.791 19:28:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@27 -- # exec 00:08:05.791 19:28:51 -- common/autotest_common.sh@29 -- # exec 00:08:05.791 19:28:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:05.791 19:28:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:05.791 19:28:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:05.791 19:28:51 -- common/autotest_common.sh@18 -- # set -x 00:08:05.791 19:28:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:05.791 19:28:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:05.791 19:28:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:05.791 19:28:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:05.791 19:28:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:05.791 19:28:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:05.791 19:28:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:05.791 19:28:51 -- scripts/common.sh@335 -- # IFS=.-: 00:08:05.791 19:28:51 -- scripts/common.sh@335 -- # read -ra ver1 00:08:05.791 19:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.791 19:28:51 -- scripts/common.sh@336 -- # read -ra ver2 00:08:05.791 19:28:51 -- scripts/common.sh@337 -- # local 'op=<' 00:08:05.791 19:28:51 -- scripts/common.sh@339 -- # ver1_l=2 00:08:05.791 19:28:51 -- scripts/common.sh@340 -- # ver2_l=1 00:08:05.791 19:28:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:05.791 19:28:51 -- scripts/common.sh@343 -- # case "$op" in 00:08:05.791 19:28:51 -- scripts/common.sh@344 -- # : 1 00:08:05.791 19:28:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:05.791 19:28:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.791 19:28:51 -- scripts/common.sh@364 -- # decimal 1 00:08:05.791 19:28:51 -- scripts/common.sh@352 -- # local d=1 00:08:05.791 19:28:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.791 19:28:51 -- scripts/common.sh@354 -- # echo 1 00:08:05.791 19:28:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:05.791 19:28:51 -- scripts/common.sh@365 -- # decimal 2 00:08:05.791 19:28:51 -- scripts/common.sh@352 -- # local d=2 00:08:05.791 19:28:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.791 19:28:51 -- scripts/common.sh@354 -- # echo 2 00:08:05.791 19:28:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:05.791 19:28:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:05.791 19:28:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:05.791 19:28:51 -- scripts/common.sh@367 -- # return 0 00:08:05.791 19:28:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.791 19:28:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:05.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.791 --rc genhtml_branch_coverage=1 00:08:05.791 --rc genhtml_function_coverage=1 00:08:05.791 --rc genhtml_legend=1 00:08:05.791 --rc geninfo_all_blocks=1 00:08:05.791 --rc geninfo_unexecuted_blocks=1 00:08:05.791 00:08:05.791 ' 00:08:05.791 19:28:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:05.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.791 --rc genhtml_branch_coverage=1 00:08:05.791 --rc genhtml_function_coverage=1 00:08:05.791 --rc genhtml_legend=1 00:08:05.791 --rc geninfo_all_blocks=1 00:08:05.791 --rc geninfo_unexecuted_blocks=1 00:08:05.791 00:08:05.791 ' 00:08:05.791 19:28:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:05.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.792 --rc genhtml_branch_coverage=1 00:08:05.792 --rc genhtml_function_coverage=1 00:08:05.792 --rc genhtml_legend=1 00:08:05.792 --rc geninfo_all_blocks=1 00:08:05.792 --rc 
geninfo_unexecuted_blocks=1 00:08:05.792 00:08:05.792 ' 00:08:05.792 19:28:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:05.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.792 --rc genhtml_branch_coverage=1 00:08:05.792 --rc genhtml_function_coverage=1 00:08:05.792 --rc genhtml_legend=1 00:08:05.792 --rc geninfo_all_blocks=1 00:08:05.792 --rc geninfo_unexecuted_blocks=1 00:08:05.792 00:08:05.792 ' 00:08:05.792 19:28:51 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.792 19:28:51 -- nvmf/common.sh@7 -- # uname -s 00:08:05.792 19:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.792 19:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.792 19:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.792 19:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.792 19:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.792 19:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.792 19:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.792 19:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.792 19:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.792 19:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:05.792 19:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:05.792 19:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.792 19:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.792 19:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.792 19:28:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.792 19:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.792 19:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.792 19:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.792 19:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.792 19:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.792 19:28:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.792 19:28:51 -- paths/export.sh@5 -- # export PATH 00:08:05.792 19:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.792 19:28:51 -- nvmf/common.sh@46 -- # : 0 00:08:05.792 19:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:05.792 19:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:05.792 19:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:05.792 19:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.792 19:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.792 19:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:05.792 19:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:05.792 19:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:05.792 19:28:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:05.792 19:28:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:05.792 19:28:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:05.792 19:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:05.792 19:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.792 19:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:05.792 19:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:05.792 19:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:05.792 19:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.792 19:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.792 19:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.792 19:28:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:05.792 19:28:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:05.792 19:28:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.792 19:28:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.792 19:28:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:05.792 19:28:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:05.792 19:28:51 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.792 19:28:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.792 19:28:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.792 19:28:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.792 19:28:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.792 19:28:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.792 19:28:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.792 19:28:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.792 19:28:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:05.792 19:28:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:05.792 Cannot find device "nvmf_tgt_br" 00:08:05.792 19:28:51 -- nvmf/common.sh@154 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.793 Cannot find device "nvmf_tgt_br2" 00:08:05.793 19:28:51 -- nvmf/common.sh@155 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:05.793 19:28:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:05.793 Cannot find device "nvmf_tgt_br" 00:08:05.793 19:28:51 -- nvmf/common.sh@157 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:05.793 Cannot find device "nvmf_tgt_br2" 00:08:05.793 19:28:51 -- nvmf/common.sh@158 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:05.793 19:28:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:05.793 19:28:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.793 19:28:51 -- nvmf/common.sh@161 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.793 19:28:51 -- nvmf/common.sh@162 -- # true 00:08:05.793 19:28:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.793 19:28:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.793 19:28:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.793 19:28:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.793 19:28:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.793 19:28:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.793 19:28:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.793 19:28:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:05.793 19:28:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:05.793 19:28:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:05.793 19:28:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:05.793 19:28:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:05.793 19:28:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:05.793 19:28:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.793 19:28:51 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:05.793 19:28:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.793 19:28:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:05.793 19:28:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:05.793 19:28:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.793 19:28:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.793 19:28:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:05.793 19:28:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.793 19:28:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:05.793 19:28:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:05.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:05.793 00:08:05.793 --- 10.0.0.2 ping statistics --- 00:08:05.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.793 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:05.793 19:28:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:05.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:05.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:05.793 00:08:05.793 --- 10.0.0.3 ping statistics --- 00:08:05.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.793 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:05.793 19:28:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:05.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:08:05.793 00:08:05.793 --- 10.0.0.1 ping statistics --- 00:08:05.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.793 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:05.793 19:28:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.793 19:28:51 -- nvmf/common.sh@421 -- # return 0 00:08:05.793 19:28:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:05.793 19:28:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.793 19:28:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:05.793 19:28:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:05.793 19:28:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.793 19:28:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:05.793 19:28:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:05.793 19:28:51 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:05.793 19:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:05.793 19:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.793 19:28:51 -- common/autotest_common.sh@10 -- # set +x 00:08:05.793 ************************************ 00:08:05.793 START TEST nvmf_filesystem_no_in_capsule 00:08:05.793 ************************************ 00:08:05.793 19:28:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:05.793 19:28:51 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:05.793 19:28:51 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:05.793 19:28:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:05.793 19:28:51 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:05.793 19:28:51 -- common/autotest_common.sh@10 -- # set +x 00:08:05.793 19:28:51 -- nvmf/common.sh@469 -- # nvmfpid=72277 00:08:05.793 19:28:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.793 19:28:51 -- nvmf/common.sh@470 -- # waitforlisten 72277 00:08:05.793 19:28:51 -- common/autotest_common.sh@829 -- # '[' -z 72277 ']' 00:08:05.793 19:28:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.793 19:28:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.793 19:28:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.793 19:28:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.793 19:28:51 -- common/autotest_common.sh@10 -- # set +x 00:08:05.793 [2024-12-15 19:28:51.847447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:05.793 [2024-12-15 19:28:51.847539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.793 [2024-12-15 19:28:51.991861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.794 [2024-12-15 19:28:52.086501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.794 [2024-12-15 19:28:52.086701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.794 [2024-12-15 19:28:52.086719] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.794 [2024-12-15 19:28:52.086742] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
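The nvmf/common.sh trace above (nvmf_veth_init) is the harness building the isolated test network that every target test in this run reuses: veth pairs for the initiator and target sides, a network namespace holding the target-side ends, a bridge joining the host-side peers, and an iptables rule opening the NVMe/TCP port. A condensed, standalone sketch of that topology, using the names and addresses shown in the log (the second target pair, nvmf_tgt_if2/10.0.0.3, is created the same way and is omitted here); this illustrates what the harness does rather than replacing nvmf_veth_init:

# namespace for the SPDK target plus veth pairs for each side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

# the target-side end lives inside the namespace; address both ends
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring links up, bridge the peer ends, and admit NVMe/TCP traffic on port 4420
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check: the initiator can reach the target address before nvmf_tgt starts
ping -c 1 10.0.0.2

The three pings in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) verify both directions before NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk", so the target process itself runs inside the namespace.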
00:08:05.794 [2024-12-15 19:28:52.086973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.794 [2024-12-15 19:28:52.087062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.794 [2024-12-15 19:28:52.088092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.794 [2024-12-15 19:28:52.088114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.053 19:28:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.053 19:28:52 -- common/autotest_common.sh@862 -- # return 0 00:08:06.053 19:28:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.053 19:28:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.053 19:28:52 -- common/autotest_common.sh@10 -- # set +x 00:08:06.311 19:28:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.311 19:28:52 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:06.311 19:28:52 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:06.311 19:28:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.311 19:28:52 -- common/autotest_common.sh@10 -- # set +x 00:08:06.311 [2024-12-15 19:28:52.992695] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.311 19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.311 19:28:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:06.311 19:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.311 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:08:06.570 Malloc1 00:08:06.570 19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.570 19:28:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.570 19:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.570 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:08:06.570 19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.570 19:28:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.570 19:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.570 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:08:06.570 19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.571 19:28:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.571 19:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.571 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:08:06.571 [2024-12-15 19:28:53.251336] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.571 19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.571 19:28:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:06.571 19:28:53 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:06.571 19:28:53 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:06.571 19:28:53 -- common/autotest_common.sh@1369 -- # local bs 00:08:06.571 19:28:53 -- common/autotest_common.sh@1370 -- # local nb 00:08:06.571 19:28:53 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:06.571 19:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.571 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:08:06.571 
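The rpc_cmd calls above are the whole target-side configuration for the no-in-capsule case: a TCP transport with an 8192-byte I/O unit and in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420 with serial SPDKISFASTANDAWESOME. rpc_cmd is the harness wrapper around SPDK's rpc.py; an equivalent standalone sequence, assuming rpc.py from the same checkout and the default RPC socket (the path below is inferred from the nvmf_tgt binary path in the log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                      # TCP transport, in-capsule data disabled
$RPC bdev_malloc_create 512 512 -b Malloc1                             # 512 MiB, 512-byte blocks -> 1048576 blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs output that follows is parsed with jq for .block_size (512) and .num_blocks (1048576), which is how get_bdev_size arrives at the 512 MiB / 536870912-byte figure used later to validate the size of the namespace seen by the initiator.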
19:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.571 19:28:53 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:06.571 { 00:08:06.571 "aliases": [ 00:08:06.571 "be665339-aabc-4194-895c-a6a483bc1f74" 00:08:06.571 ], 00:08:06.571 "assigned_rate_limits": { 00:08:06.571 "r_mbytes_per_sec": 0, 00:08:06.571 "rw_ios_per_sec": 0, 00:08:06.571 "rw_mbytes_per_sec": 0, 00:08:06.571 "w_mbytes_per_sec": 0 00:08:06.571 }, 00:08:06.571 "block_size": 512, 00:08:06.571 "claim_type": "exclusive_write", 00:08:06.571 "claimed": true, 00:08:06.571 "driver_specific": {}, 00:08:06.571 "memory_domains": [ 00:08:06.571 { 00:08:06.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.571 "dma_device_type": 2 00:08:06.571 } 00:08:06.571 ], 00:08:06.571 "name": "Malloc1", 00:08:06.571 "num_blocks": 1048576, 00:08:06.571 "product_name": "Malloc disk", 00:08:06.571 "supported_io_types": { 00:08:06.571 "abort": true, 00:08:06.571 "compare": false, 00:08:06.571 "compare_and_write": false, 00:08:06.571 "flush": true, 00:08:06.571 "nvme_admin": false, 00:08:06.571 "nvme_io": false, 00:08:06.571 "read": true, 00:08:06.571 "reset": true, 00:08:06.571 "unmap": true, 00:08:06.571 "write": true, 00:08:06.571 "write_zeroes": true 00:08:06.571 }, 00:08:06.571 "uuid": "be665339-aabc-4194-895c-a6a483bc1f74", 00:08:06.571 "zoned": false 00:08:06.571 } 00:08:06.571 ]' 00:08:06.571 19:28:53 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:06.571 19:28:53 -- common/autotest_common.sh@1372 -- # bs=512 00:08:06.571 19:28:53 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:06.571 19:28:53 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:06.571 19:28:53 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:06.571 19:28:53 -- common/autotest_common.sh@1377 -- # echo 512 00:08:06.571 19:28:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.571 19:28:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.829 19:28:53 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.829 19:28:53 -- common/autotest_common.sh@1187 -- # local i=0 00:08:06.829 19:28:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.829 19:28:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:06.829 19:28:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:08.729 19:28:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:08.729 19:28:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:08.729 19:28:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.729 19:28:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:08.729 19:28:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.729 19:28:55 -- common/autotest_common.sh@1197 -- # return 0 00:08:08.729 19:28:55 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.729 19:28:55 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:08.729 19:28:55 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.729 19:28:55 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:08.729 19:28:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.729 19:28:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.729 19:28:55 -- 
setup/common.sh@80 -- # echo 536870912 00:08:08.730 19:28:55 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.730 19:28:55 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.730 19:28:55 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.730 19:28:55 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:08.987 19:28:55 -- target/filesystem.sh@69 -- # partprobe 00:08:08.987 19:28:55 -- target/filesystem.sh@70 -- # sleep 1 00:08:09.922 19:28:56 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:09.922 19:28:56 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:09.922 19:28:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:09.922 19:28:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.922 19:28:56 -- common/autotest_common.sh@10 -- # set +x 00:08:09.922 ************************************ 00:08:09.922 START TEST filesystem_ext4 00:08:09.922 ************************************ 00:08:09.922 19:28:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:09.922 19:28:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:09.922 19:28:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.923 19:28:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:09.923 19:28:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:09.923 19:28:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:09.923 19:28:56 -- common/autotest_common.sh@914 -- # local i=0 00:08:09.923 19:28:56 -- common/autotest_common.sh@915 -- # local force 00:08:09.923 19:28:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:09.923 19:28:56 -- common/autotest_common.sh@918 -- # force=-F 00:08:09.923 19:28:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:09.923 mke2fs 1.47.0 (5-Feb-2023) 00:08:10.181 Discarding device blocks: 0/522240 done 00:08:10.181 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:10.181 Filesystem UUID: d3771a9e-1c13-4b0a-9c0f-d90753ac6417 00:08:10.181 Superblock backups stored on blocks: 00:08:10.181 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:10.181 00:08:10.181 Allocating group tables: 0/64 done 00:08:10.181 Writing inode tables: 0/64 done 00:08:10.181 Creating journal (8192 blocks): done 00:08:10.181 Writing superblocks and filesystem accounting information: 0/64 done 00:08:10.181 00:08:10.181 19:28:56 -- common/autotest_common.sh@931 -- # return 0 00:08:10.181 19:28:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.448 19:29:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.448 19:29:02 -- target/filesystem.sh@25 -- # sync 00:08:15.707 19:29:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.707 19:29:02 -- target/filesystem.sh@27 -- # sync 00:08:15.707 19:29:02 -- target/filesystem.sh@29 -- # i=0 00:08:15.707 19:29:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.707 19:29:02 -- target/filesystem.sh@37 -- # kill -0 72277 00:08:15.707 19:29:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.707 19:29:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.707 19:29:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.707 19:29:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.707 00:08:15.707 real 0m5.637s 00:08:15.707 user 0m0.030s 00:08:15.707 sys 0m0.063s 00:08:15.707 
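The host side of each filesystem_* subtest follows one pattern, visible in the trace above: connect to the subsystem over TCP, find the new block device by its SPDKISFASTANDAWESOME serial, lay down a single GPT partition, then run mkfs/mount/touch/sync/rm/umount and confirm the target process is still alive afterwards. A condensed sketch of that flow with the filesystem type as a loop variable; names, addresses, and the pid are the ones reported in this run, and the polling loop of waitforserial is omitted:

nvmfpid=72277        # nvmf_tgt pid reported by waitforlisten above

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 \
  --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1

# locate the namespace that carries the subsystem serial number (nvme0n1 in this run)
dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/ {print $1}')

parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

mkdir -p /mnt/device
for fstype in ext4 btrfs xfs; do
  force=-f; [ "$fstype" = ext4 ] && force=-F      # mke2fs spells "force" differently
  mkfs."$fstype" "$force" "/dev/${dev}p1"
  mount "/dev/${dev}p1" /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                              # the target must survive the I/O
done

The trailing lsblk/grep pairs in the trace additionally verify that both the namespace and its partition are still visible after the unmount.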
19:29:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.707 19:29:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.707 ************************************ 00:08:15.707 END TEST filesystem_ext4 00:08:15.707 ************************************ 00:08:15.707 19:29:02 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:15.707 19:29:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:15.707 19:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.707 19:29:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.707 ************************************ 00:08:15.707 START TEST filesystem_btrfs 00:08:15.707 ************************************ 00:08:15.707 19:29:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:15.707 19:29:02 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:15.707 19:29:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.707 19:29:02 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:15.707 19:29:02 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:15.707 19:29:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:15.707 19:29:02 -- common/autotest_common.sh@914 -- # local i=0 00:08:15.707 19:29:02 -- common/autotest_common.sh@915 -- # local force 00:08:15.707 19:29:02 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:15.707 19:29:02 -- common/autotest_common.sh@920 -- # force=-f 00:08:15.707 19:29:02 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:15.965 btrfs-progs v6.8.1 00:08:15.965 See https://btrfs.readthedocs.io for more information. 00:08:15.965 00:08:15.965 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:15.965 NOTE: several default settings have changed in version 5.15, please make sure 00:08:15.965 this does not affect your deployments: 00:08:15.965 - DUP for metadata (-m dup) 00:08:15.965 - enabled no-holes (-O no-holes) 00:08:15.965 - enabled free-space-tree (-R free-space-tree) 00:08:15.965 00:08:15.965 Label: (null) 00:08:15.965 UUID: de4703c8-1563-49a5-a853-02df32a626cc 00:08:15.965 Node size: 16384 00:08:15.965 Sector size: 4096 (CPU page size: 4096) 00:08:15.965 Filesystem size: 510.00MiB 00:08:15.965 Block group profiles: 00:08:15.965 Data: single 8.00MiB 00:08:15.966 Metadata: DUP 32.00MiB 00:08:15.966 System: DUP 8.00MiB 00:08:15.966 SSD detected: yes 00:08:15.966 Zoned device: no 00:08:15.966 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:15.966 Checksum: crc32c 00:08:15.966 Number of devices: 1 00:08:15.966 Devices: 00:08:15.966 ID SIZE PATH 00:08:15.966 1 510.00MiB /dev/nvme0n1p1 00:08:15.966 00:08:15.966 19:29:02 -- common/autotest_common.sh@931 -- # return 0 00:08:15.966 19:29:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.966 19:29:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.966 19:29:02 -- target/filesystem.sh@25 -- # sync 00:08:15.966 19:29:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.966 19:29:02 -- target/filesystem.sh@27 -- # sync 00:08:15.966 19:29:02 -- target/filesystem.sh@29 -- # i=0 00:08:15.966 19:29:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.966 19:29:02 -- target/filesystem.sh@37 -- # kill -0 72277 00:08:15.966 19:29:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.966 19:29:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.966 19:29:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.966 19:29:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.966 00:08:15.966 real 0m0.329s 00:08:15.966 user 0m0.027s 00:08:15.966 sys 0m0.067s 00:08:15.966 19:29:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.966 19:29:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.966 ************************************ 00:08:15.966 END TEST filesystem_btrfs 00:08:15.966 ************************************ 00:08:15.966 19:29:02 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.966 19:29:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:15.966 19:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.966 19:29:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.966 ************************************ 00:08:15.966 START TEST filesystem_xfs 00:08:15.966 ************************************ 00:08:15.966 19:29:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.966 19:29:02 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.966 19:29:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.966 19:29:02 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.966 19:29:02 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:15.966 19:29:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:15.966 19:29:02 -- common/autotest_common.sh@914 -- # local i=0 00:08:15.966 19:29:02 -- common/autotest_common.sh@915 -- # local force 00:08:15.966 19:29:02 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:15.966 19:29:02 -- common/autotest_common.sh@920 -- # force=-f 00:08:15.966 19:29:02 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:16.224 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:16.224 = sectsz=512 attr=2, projid32bit=1 00:08:16.224 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:16.224 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:16.224 data = bsize=4096 blocks=130560, imaxpct=25 00:08:16.224 = sunit=0 swidth=0 blks 00:08:16.224 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:16.224 log =internal log bsize=4096 blocks=16384, version=2 00:08:16.224 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:16.224 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.160 Discarding blocks...Done. 00:08:17.160 19:29:03 -- common/autotest_common.sh@931 -- # return 0 00:08:17.160 19:29:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.693 19:29:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.693 19:29:06 -- target/filesystem.sh@25 -- # sync 00:08:19.693 19:29:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.693 19:29:06 -- target/filesystem.sh@27 -- # sync 00:08:19.693 19:29:06 -- target/filesystem.sh@29 -- # i=0 00:08:19.693 19:29:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.693 19:29:06 -- target/filesystem.sh@37 -- # kill -0 72277 00:08:19.693 19:29:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.693 19:29:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.693 19:29:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.693 19:29:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.693 00:08:19.693 real 0m3.259s 00:08:19.693 user 0m0.022s 00:08:19.693 sys 0m0.059s 00:08:19.693 19:29:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.693 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:19.693 ************************************ 00:08:19.693 END TEST filesystem_xfs 00:08:19.693 ************************************ 00:08:19.693 19:29:06 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.693 19:29:06 -- target/filesystem.sh@93 -- # sync 00:08:19.693 19:29:06 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.693 19:29:06 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.693 19:29:06 -- common/autotest_common.sh@1208 -- # local i=0 00:08:19.693 19:29:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.693 19:29:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:19.693 19:29:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.693 19:29:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:19.693 19:29:06 -- common/autotest_common.sh@1220 -- # return 0 00:08:19.693 19:29:06 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.693 19:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.693 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:19.693 19:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.693 19:29:06 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.693 19:29:06 -- target/filesystem.sh@101 -- # killprocess 72277 00:08:19.693 19:29:06 -- common/autotest_common.sh@936 -- # '[' -z 72277 ']' 00:08:19.693 19:29:06 -- common/autotest_common.sh@940 -- # kill -0 72277 00:08:19.693 19:29:06 -- common/autotest_common.sh@941 -- # uname 00:08:19.693 19:29:06 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.693 19:29:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72277 00:08:19.693 19:29:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.693 19:29:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.693 killing process with pid 72277 00:08:19.693 19:29:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72277' 00:08:19.693 19:29:06 -- common/autotest_common.sh@955 -- # kill 72277 00:08:19.693 19:29:06 -- common/autotest_common.sh@960 -- # wait 72277 00:08:20.260 19:29:06 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.260 00:08:20.260 real 0m15.107s 00:08:20.260 user 0m57.459s 00:08:20.260 sys 0m2.441s 00:08:20.260 19:29:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.261 ************************************ 00:08:20.261 END TEST nvmf_filesystem_no_in_capsule 00:08:20.261 ************************************ 00:08:20.261 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 19:29:06 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:20.261 19:29:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.261 19:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.261 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 ************************************ 00:08:20.261 START TEST nvmf_filesystem_in_capsule 00:08:20.261 ************************************ 00:08:20.261 19:29:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:20.261 19:29:06 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:20.261 19:29:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:20.261 19:29:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:20.261 19:29:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.261 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 19:29:06 -- nvmf/common.sh@469 -- # nvmfpid=72655 00:08:20.261 19:29:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.261 19:29:06 -- nvmf/common.sh@470 -- # waitforlisten 72655 00:08:20.261 19:29:06 -- common/autotest_common.sh@829 -- # '[' -z 72655 ']' 00:08:20.261 19:29:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.261 19:29:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.261 19:29:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.261 19:29:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.261 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 [2024-12-15 19:29:07.002802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
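From here the whole sequence repeats as nvmf_filesystem_in_capsule: run_test passes 4096 instead of 0, a fresh nvmf_tgt is started (pid 72655), and the only functional difference in the target configuration is the in-capsule data size handed to nvmf_create_transport, which lets small write payloads ride inside the NVMe/TCP command capsule instead of being transferred in a separate data phase. Side by side, under the same assumptions as the earlier rpc.py sketch:

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # no_in_capsule run (pid 72277)
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096    # in_capsule run (pid 72655)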
00:08:20.261 [2024-12-15 19:29:07.002884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.261 [2024-12-15 19:29:07.134923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.520 [2024-12-15 19:29:07.231659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.520 [2024-12-15 19:29:07.231789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.520 [2024-12-15 19:29:07.231802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.520 [2024-12-15 19:29:07.231861] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.520 [2024-12-15 19:29:07.231992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.520 [2024-12-15 19:29:07.232056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.520 [2024-12-15 19:29:07.232838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.520 [2024-12-15 19:29:07.233382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.456 19:29:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.456 19:29:08 -- common/autotest_common.sh@862 -- # return 0 00:08:21.456 19:29:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:21.456 19:29:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 19:29:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.456 19:29:08 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:21.456 19:29:08 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:21.456 19:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 [2024-12-15 19:29:08.068888] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.456 19:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.456 19:29:08 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 [2024-12-15 19:29:08.313751] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:21.456 19:29:08 -- common/autotest_common.sh@1369 -- # local bs 00:08:21.456 19:29:08 -- common/autotest_common.sh@1370 -- # local nb 00:08:21.456 19:29:08 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:21.456 19:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.456 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.456 19:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.456 19:29:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:21.456 { 00:08:21.456 "aliases": [ 00:08:21.456 "358d7ff4-5493-46d8-9499-aa54d5c55c11" 00:08:21.456 ], 00:08:21.456 "assigned_rate_limits": { 00:08:21.456 "r_mbytes_per_sec": 0, 00:08:21.456 "rw_ios_per_sec": 0, 00:08:21.456 "rw_mbytes_per_sec": 0, 00:08:21.456 "w_mbytes_per_sec": 0 00:08:21.456 }, 00:08:21.456 "block_size": 512, 00:08:21.456 "claim_type": "exclusive_write", 00:08:21.456 "claimed": true, 00:08:21.456 "driver_specific": {}, 00:08:21.456 "memory_domains": [ 00:08:21.456 { 00:08:21.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.456 "dma_device_type": 2 00:08:21.456 } 00:08:21.456 ], 00:08:21.456 "name": "Malloc1", 00:08:21.456 "num_blocks": 1048576, 00:08:21.456 "product_name": "Malloc disk", 00:08:21.456 "supported_io_types": { 00:08:21.456 "abort": true, 00:08:21.456 "compare": false, 00:08:21.456 "compare_and_write": false, 00:08:21.456 "flush": true, 00:08:21.456 "nvme_admin": false, 00:08:21.456 "nvme_io": false, 00:08:21.456 "read": true, 00:08:21.456 "reset": true, 00:08:21.456 "unmap": true, 00:08:21.456 "write": true, 00:08:21.456 "write_zeroes": true 00:08:21.456 }, 00:08:21.456 "uuid": "358d7ff4-5493-46d8-9499-aa54d5c55c11", 00:08:21.456 "zoned": false 00:08:21.456 } 00:08:21.456 ]' 00:08:21.456 19:29:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:21.715 19:29:08 -- common/autotest_common.sh@1372 -- # bs=512 00:08:21.715 19:29:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:21.715 19:29:08 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:21.715 19:29:08 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:21.715 19:29:08 -- common/autotest_common.sh@1377 -- # echo 512 00:08:21.715 19:29:08 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:21.715 19:29:08 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.974 19:29:08 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.974 19:29:08 -- common/autotest_common.sh@1187 -- # local i=0 00:08:21.974 19:29:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.974 19:29:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:21.974 19:29:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:23.878 19:29:10 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:23.878 19:29:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.878 19:29:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:23.878 19:29:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:23.878 19:29:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.878 19:29:10 -- common/autotest_common.sh@1197 -- # return 0 00:08:23.878 19:29:10 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:23.878 19:29:10 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:23.878 19:29:10 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:23.878 19:29:10 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:23.878 19:29:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:23.878 19:29:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:23.878 19:29:10 -- setup/common.sh@80 -- # echo 536870912 00:08:23.878 19:29:10 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:23.878 19:29:10 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:23.878 19:29:10 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:23.878 19:29:10 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:23.878 19:29:10 -- target/filesystem.sh@69 -- # partprobe 00:08:23.878 19:29:10 -- target/filesystem.sh@70 -- # sleep 1 00:08:25.254 19:29:11 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:25.254 19:29:11 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.254 19:29:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:25.254 19:29:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.254 19:29:11 -- common/autotest_common.sh@10 -- # set +x 00:08:25.254 ************************************ 00:08:25.254 START TEST filesystem_in_capsule_ext4 00:08:25.254 ************************************ 00:08:25.254 19:29:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.254 19:29:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.254 19:29:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.254 19:29:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.254 19:29:11 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:25.254 19:29:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:25.254 19:29:11 -- common/autotest_common.sh@914 -- # local i=0 00:08:25.254 19:29:11 -- common/autotest_common.sh@915 -- # local force 00:08:25.254 19:29:11 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:25.254 19:29:11 -- common/autotest_common.sh@918 -- # force=-F 00:08:25.254 19:29:11 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.254 mke2fs 1.47.0 (5-Feb-2023) 00:08:25.254 Discarding device blocks: 0/522240 done 00:08:25.254 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:25.254 Filesystem UUID: cb52063e-c1d8-4e36-b35b-55118490785a 00:08:25.254 Superblock backups stored on blocks: 00:08:25.254 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:25.254 00:08:25.254 Allocating group tables: 0/64 done 00:08:25.254 Writing inode tables: 0/64 done 00:08:25.254 Creating journal (8192 blocks): done 00:08:25.254 Writing superblocks and filesystem accounting information: 0/64 done 00:08:25.254 00:08:25.254 19:29:11 
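Before formatting, the harness checks that the namespace seen by the initiator is exactly as large as the malloc bdev behind it: waitforserial polls lsblk until the SPDKISFASTANDAWESOME serial appears, and sec_size_to_bytes then resolves the device size (536870912 bytes, matching malloc_size). The helper's implementation is not shown in the trace; an equivalent standalone check, assuming the nvme0n1 name reported above and the kernel convention that /sys/block sizes are counted in 512-byte sectors:

malloc_size=$((512 * 1024 * 1024))                        # 536870912, from block_size x num_blocks in bdev_get_bdevs
nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))     # /sys reports size in 512-byte sectors
[ "$nvme_size" -eq "$malloc_size" ] && echo "namespace size OK"

Only after this comparison succeeds does the script partition the device and move on to the in-capsule ext4/btrfs/xfs subtests.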
-- common/autotest_common.sh@931 -- # return 0 00:08:25.254 19:29:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.548 19:29:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.548 19:29:17 -- target/filesystem.sh@25 -- # sync 00:08:30.548 19:29:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.548 19:29:17 -- target/filesystem.sh@27 -- # sync 00:08:30.548 19:29:17 -- target/filesystem.sh@29 -- # i=0 00:08:30.548 19:29:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.548 19:29:17 -- target/filesystem.sh@37 -- # kill -0 72655 00:08:30.548 19:29:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.548 19:29:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.548 19:29:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.548 19:29:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.548 00:08:30.548 real 0m5.675s 00:08:30.548 user 0m0.023s 00:08:30.548 sys 0m0.072s 00:08:30.548 19:29:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.548 19:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:30.548 ************************************ 00:08:30.548 END TEST filesystem_in_capsule_ext4 00:08:30.548 ************************************ 00:08:30.807 19:29:17 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:30.807 19:29:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.807 19:29:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.807 19:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 ************************************ 00:08:30.807 START TEST filesystem_in_capsule_btrfs 00:08:30.807 ************************************ 00:08:30.807 19:29:17 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:30.807 19:29:17 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:30.807 19:29:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.807 19:29:17 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:30.807 19:29:17 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:30.807 19:29:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:30.807 19:29:17 -- common/autotest_common.sh@914 -- # local i=0 00:08:30.807 19:29:17 -- common/autotest_common.sh@915 -- # local force 00:08:30.807 19:29:17 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:30.807 19:29:17 -- common/autotest_common.sh@920 -- # force=-f 00:08:30.807 19:29:17 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:30.807 btrfs-progs v6.8.1 00:08:30.807 See https://btrfs.readthedocs.io for more information. 00:08:30.807 00:08:30.807 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:30.807 NOTE: several default settings have changed in version 5.15, please make sure 00:08:30.807 this does not affect your deployments: 00:08:30.807 - DUP for metadata (-m dup) 00:08:30.807 - enabled no-holes (-O no-holes) 00:08:30.807 - enabled free-space-tree (-R free-space-tree) 00:08:30.807 00:08:30.807 Label: (null) 00:08:30.807 UUID: 0ea7d35b-d4f7-4bb4-9b91-3b131fca7c4a 00:08:30.807 Node size: 16384 00:08:30.807 Sector size: 4096 (CPU page size: 4096) 00:08:30.807 Filesystem size: 510.00MiB 00:08:30.807 Block group profiles: 00:08:30.807 Data: single 8.00MiB 00:08:30.807 Metadata: DUP 32.00MiB 00:08:30.807 System: DUP 8.00MiB 00:08:30.807 SSD detected: yes 00:08:30.807 Zoned device: no 00:08:30.807 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:30.807 Checksum: crc32c 00:08:30.807 Number of devices: 1 00:08:30.807 Devices: 00:08:30.807 ID SIZE PATH 00:08:30.807 1 510.00MiB /dev/nvme0n1p1 00:08:30.807 00:08:30.807 19:29:17 -- common/autotest_common.sh@931 -- # return 0 00:08:30.807 19:29:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.066 19:29:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.066 19:29:17 -- target/filesystem.sh@25 -- # sync 00:08:31.066 19:29:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.066 19:29:17 -- target/filesystem.sh@27 -- # sync 00:08:31.066 19:29:17 -- target/filesystem.sh@29 -- # i=0 00:08:31.066 19:29:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.066 19:29:17 -- target/filesystem.sh@37 -- # kill -0 72655 00:08:31.066 19:29:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.066 19:29:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.066 19:29:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.066 19:29:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.066 00:08:31.066 real 0m0.286s 00:08:31.066 user 0m0.015s 00:08:31.066 sys 0m0.077s 00:08:31.066 19:29:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.066 19:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:31.066 ************************************ 00:08:31.066 END TEST filesystem_in_capsule_btrfs 00:08:31.066 ************************************ 00:08:31.066 19:29:17 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:31.066 19:29:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.066 19:29:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.066 19:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:31.066 ************************************ 00:08:31.066 START TEST filesystem_in_capsule_xfs 00:08:31.066 ************************************ 00:08:31.066 19:29:17 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:31.066 19:29:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:31.066 19:29:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.066 19:29:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:31.066 19:29:17 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:31.066 19:29:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:31.066 19:29:17 -- common/autotest_common.sh@914 -- # local i=0 00:08:31.066 19:29:17 -- common/autotest_common.sh@915 -- # local force 00:08:31.066 19:29:17 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:31.066 19:29:17 -- common/autotest_common.sh@920 -- # force=-f 00:08:31.066 19:29:17 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:31.324 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:31.324 = sectsz=512 attr=2, projid32bit=1 00:08:31.324 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:31.324 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:31.324 data = bsize=4096 blocks=130560, imaxpct=25 00:08:31.324 = sunit=0 swidth=0 blks 00:08:31.324 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:31.324 log =internal log bsize=4096 blocks=16384, version=2 00:08:31.324 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:31.324 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:31.891 Discarding blocks...Done. 00:08:31.891 19:29:18 -- common/autotest_common.sh@931 -- # return 0 00:08:31.891 19:29:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.793 19:29:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.793 19:29:20 -- target/filesystem.sh@25 -- # sync 00:08:33.793 19:29:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.793 19:29:20 -- target/filesystem.sh@27 -- # sync 00:08:33.793 19:29:20 -- target/filesystem.sh@29 -- # i=0 00:08:33.793 19:29:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.793 19:29:20 -- target/filesystem.sh@37 -- # kill -0 72655 00:08:33.793 19:29:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.793 19:29:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.793 19:29:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.793 19:29:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.793 00:08:33.793 real 0m2.675s 00:08:33.793 user 0m0.024s 00:08:33.793 sys 0m0.055s 00:08:33.793 19:29:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.793 19:29:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.793 ************************************ 00:08:33.793 END TEST filesystem_in_capsule_xfs 00:08:33.793 ************************************ 00:08:33.793 19:29:20 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:33.793 19:29:20 -- target/filesystem.sh@93 -- # sync 00:08:33.793 19:29:20 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.793 19:29:20 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.793 19:29:20 -- common/autotest_common.sh@1208 -- # local i=0 00:08:33.793 19:29:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:33.793 19:29:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.793 19:29:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.793 19:29:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:33.793 19:29:20 -- common/autotest_common.sh@1220 -- # return 0 00:08:33.793 19:29:20 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.793 19:29:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.793 19:29:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.793 19:29:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.793 19:29:20 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.793 19:29:20 -- target/filesystem.sh@101 -- # killprocess 72655 00:08:33.793 19:29:20 -- common/autotest_common.sh@936 -- # '[' -z 72655 ']' 00:08:33.793 19:29:20 -- common/autotest_common.sh@940 -- # kill -0 72655 00:08:33.793 19:29:20 -- 
common/autotest_common.sh@941 -- # uname 00:08:33.793 19:29:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.793 19:29:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72655 00:08:34.052 19:29:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.052 19:29:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.052 killing process with pid 72655 00:08:34.052 19:29:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72655' 00:08:34.052 19:29:20 -- common/autotest_common.sh@955 -- # kill 72655 00:08:34.052 19:29:20 -- common/autotest_common.sh@960 -- # wait 72655 00:08:34.619 19:29:21 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:34.619 00:08:34.619 real 0m14.349s 00:08:34.619 user 0m54.539s 00:08:34.619 sys 0m2.350s 00:08:34.619 19:29:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.619 ************************************ 00:08:34.619 END TEST nvmf_filesystem_in_capsule 00:08:34.619 ************************************ 00:08:34.619 19:29:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 19:29:21 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:34.619 19:29:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.619 19:29:21 -- nvmf/common.sh@116 -- # sync 00:08:34.619 19:29:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.619 19:29:21 -- nvmf/common.sh@119 -- # set +e 00:08:34.619 19:29:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.619 19:29:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.619 rmmod nvme_tcp 00:08:34.619 rmmod nvme_fabrics 00:08:34.619 rmmod nvme_keyring 00:08:34.619 19:29:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.619 19:29:21 -- nvmf/common.sh@123 -- # set -e 00:08:34.619 19:29:21 -- nvmf/common.sh@124 -- # return 0 00:08:34.619 19:29:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:34.619 19:29:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.619 19:29:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.619 19:29:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.619 19:29:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.619 19:29:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.619 19:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.619 19:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.619 19:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.619 19:29:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:34.619 00:08:34.619 real 0m30.461s 00:08:34.619 user 1m52.379s 00:08:34.619 sys 0m5.235s 00:08:34.619 19:29:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.619 19:29:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 ************************************ 00:08:34.619 END TEST nvmf_filesystem 00:08:34.619 ************************************ 00:08:34.879 19:29:21 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:34.879 19:29:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.879 19:29:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.879 ************************************ 00:08:34.879 START TEST nvmf_discovery 00:08:34.879 ************************************ 00:08:34.879 19:29:21 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:34.879 * Looking for test storage... 00:08:34.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.879 19:29:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.879 19:29:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.879 19:29:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.879 19:29:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.879 19:29:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.879 19:29:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.879 19:29:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.879 19:29:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.879 19:29:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.879 19:29:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.879 19:29:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.879 19:29:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.879 19:29:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.879 19:29:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.879 19:29:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.879 19:29:21 -- scripts/common.sh@344 -- # : 1 00:08:34.879 19:29:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.879 19:29:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.879 19:29:21 -- scripts/common.sh@364 -- # decimal 1 00:08:34.879 19:29:21 -- scripts/common.sh@352 -- # local d=1 00:08:34.879 19:29:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.879 19:29:21 -- scripts/common.sh@354 -- # echo 1 00:08:34.879 19:29:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.879 19:29:21 -- scripts/common.sh@365 -- # decimal 2 00:08:34.879 19:29:21 -- scripts/common.sh@352 -- # local d=2 00:08:34.879 19:29:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.879 19:29:21 -- scripts/common.sh@354 -- # echo 2 00:08:34.879 19:29:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.879 19:29:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.879 19:29:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.879 19:29:21 -- scripts/common.sh@367 -- # return 0 00:08:34.879 19:29:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.879 --rc genhtml_branch_coverage=1 00:08:34.879 --rc genhtml_function_coverage=1 00:08:34.879 --rc genhtml_legend=1 00:08:34.879 --rc geninfo_all_blocks=1 00:08:34.879 --rc geninfo_unexecuted_blocks=1 00:08:34.879 00:08:34.879 ' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.879 --rc genhtml_branch_coverage=1 00:08:34.879 --rc genhtml_function_coverage=1 00:08:34.879 --rc genhtml_legend=1 00:08:34.879 --rc geninfo_all_blocks=1 00:08:34.879 --rc geninfo_unexecuted_blocks=1 00:08:34.879 00:08:34.879 ' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.879 --rc genhtml_branch_coverage=1 00:08:34.879 --rc genhtml_function_coverage=1 00:08:34.879 --rc genhtml_legend=1 00:08:34.879 
--rc geninfo_all_blocks=1 00:08:34.879 --rc geninfo_unexecuted_blocks=1 00:08:34.879 00:08:34.879 ' 00:08:34.879 19:29:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.879 --rc genhtml_branch_coverage=1 00:08:34.879 --rc genhtml_function_coverage=1 00:08:34.879 --rc genhtml_legend=1 00:08:34.879 --rc geninfo_all_blocks=1 00:08:34.879 --rc geninfo_unexecuted_blocks=1 00:08:34.879 00:08:34.879 ' 00:08:34.879 19:29:21 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.879 19:29:21 -- nvmf/common.sh@7 -- # uname -s 00:08:34.879 19:29:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.879 19:29:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.879 19:29:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.879 19:29:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.879 19:29:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.879 19:29:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.879 19:29:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.879 19:29:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.879 19:29:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.879 19:29:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:34.879 19:29:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:34.879 19:29:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.879 19:29:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.879 19:29:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.879 19:29:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.879 19:29:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.879 19:29:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.879 19:29:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.879 19:29:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.879 19:29:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.879 19:29:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.879 19:29:21 -- paths/export.sh@5 -- # export PATH 00:08:34.879 19:29:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.879 19:29:21 -- nvmf/common.sh@46 -- # : 0 00:08:34.879 19:29:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.879 19:29:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.879 19:29:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.879 19:29:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.879 19:29:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.879 19:29:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.879 19:29:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.879 19:29:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.879 19:29:21 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:34.879 19:29:21 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:34.879 19:29:21 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:34.879 19:29:21 -- target/discovery.sh@15 -- # hash nvme 00:08:34.879 19:29:21 -- target/discovery.sh@20 -- # nvmftestinit 00:08:34.879 19:29:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:34.879 19:29:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.879 19:29:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:34.879 19:29:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:34.879 19:29:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:34.879 19:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.879 19:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.879 19:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.879 19:29:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:34.879 19:29:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:34.880 19:29:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.880 19:29:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.880 19:29:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.880 19:29:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:34.880 19:29:21 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.880 19:29:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.880 19:29:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.880 19:29:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.880 19:29:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.880 19:29:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.880 19:29:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.880 19:29:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.880 19:29:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:34.880 19:29:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:35.139 Cannot find device "nvmf_tgt_br" 00:08:35.139 19:29:21 -- nvmf/common.sh@154 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.139 Cannot find device "nvmf_tgt_br2" 00:08:35.139 19:29:21 -- nvmf/common.sh@155 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:35.139 19:29:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:35.139 Cannot find device "nvmf_tgt_br" 00:08:35.139 19:29:21 -- nvmf/common.sh@157 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:35.139 Cannot find device "nvmf_tgt_br2" 00:08:35.139 19:29:21 -- nvmf/common.sh@158 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:35.139 19:29:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:35.139 19:29:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.139 19:29:21 -- nvmf/common.sh@161 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.139 19:29:21 -- nvmf/common.sh@162 -- # true 00:08:35.139 19:29:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.139 19:29:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.139 19:29:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.139 19:29:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.139 19:29:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.139 19:29:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.139 19:29:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.139 19:29:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.139 19:29:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.139 19:29:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:35.139 19:29:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:35.139 19:29:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:35.139 19:29:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:35.139 19:29:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.139 19:29:21 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.139 19:29:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.139 19:29:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:35.139 19:29:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:35.139 19:29:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.139 19:29:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.139 19:29:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.139 19:29:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.398 19:29:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.398 19:29:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:35.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:08:35.398 00:08:35.398 --- 10.0.0.2 ping statistics --- 00:08:35.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.398 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:35.398 19:29:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:35.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:35.398 00:08:35.398 --- 10.0.0.3 ping statistics --- 00:08:35.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.398 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:35.398 19:29:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:35.398 00:08:35.398 --- 10.0.0.1 ping statistics --- 00:08:35.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.398 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:35.398 19:29:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.398 19:29:22 -- nvmf/common.sh@421 -- # return 0 00:08:35.398 19:29:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.398 19:29:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.398 19:29:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:35.398 19:29:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:35.398 19:29:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.398 19:29:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:35.398 19:29:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:35.398 19:29:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:35.398 19:29:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:35.398 19:29:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.398 19:29:22 -- common/autotest_common.sh@10 -- # set +x 00:08:35.398 19:29:22 -- nvmf/common.sh@469 -- # nvmfpid=73203 00:08:35.398 19:29:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.398 19:29:22 -- nvmf/common.sh@470 -- # waitforlisten 73203 00:08:35.398 19:29:22 -- common/autotest_common.sh@829 -- # '[' -z 73203 ']' 00:08:35.398 19:29:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.398 19:29:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.398 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.398 19:29:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.398 19:29:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.398 19:29:22 -- common/autotest_common.sh@10 -- # set +x 00:08:35.398 [2024-12-15 19:29:22.145036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:35.398 [2024-12-15 19:29:22.145126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.398 [2024-12-15 19:29:22.284486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.657 [2024-12-15 19:29:22.373565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.657 [2024-12-15 19:29:22.373699] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.657 [2024-12-15 19:29:22.373711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.657 [2024-12-15 19:29:22.373719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.657 [2024-12-15 19:29:22.373883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.657 [2024-12-15 19:29:22.374330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.657 [2024-12-15 19:29:22.374946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.657 [2024-12-15 19:29:22.374955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.592 19:29:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.592 19:29:23 -- common/autotest_common.sh@862 -- # return 0 00:08:36.592 19:29:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:36.592 19:29:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.592 19:29:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 [2024-12-15 19:29:23.218487] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@26 -- # seq 1 4 00:08:36.592 19:29:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.592 19:29:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 Null1 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
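The entries just above show discovery.sh creating the TCP transport and entering a `seq 1 4` loop that the trace continues below: each pass backs a new subsystem with a null bdev and exposes it on 10.0.0.2:4420, and the loop is followed by a discovery listener plus one referral. A condensed sketch of the same RPC sequence, assuming `rpc_cmd` simply forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the wrapper itself is not expanded in this trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed expansion of rpc_cmd

    $rpc nvmf_create_transport -t tcp -o -u 8192       # same flags as in the trace

    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE set above
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Traced further down: a discovery listener and a referral entry on port 4430.
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The six records in the discovery log that `nvme discover` prints below (one current discovery subsystem, four NVMe subsystems, one referral) correspond one-to-one to these calls.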
00:08:36.592 19:29:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 [2024-12-15 19:29:23.285313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.592 19:29:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 Null2 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.592 19:29:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 Null3 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.592 19:29:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 Null4 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.592 19:29:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:36.592 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.592 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.593 19:29:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:36.593 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.593 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.593 19:29:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.593 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.593 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.593 19:29:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:36.593 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.593 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.593 19:29:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.852 00:08:36.852 Discovery Log Number of Records 6, Generation counter 6 00:08:36.852 =====Discovery Log Entry 0====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: current discovery subsystem 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4420 00:08:36.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: explicit discovery connections, duplicate discovery information 00:08:36.852 sectype: none 00:08:36.852 =====Discovery Log Entry 1====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: nvme subsystem 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4420 00:08:36.852 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: none 00:08:36.852 sectype: none 00:08:36.852 =====Discovery Log Entry 2====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: nvme subsystem 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4420 
00:08:36.852 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: none 00:08:36.852 sectype: none 00:08:36.852 =====Discovery Log Entry 3====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: nvme subsystem 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4420 00:08:36.852 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: none 00:08:36.852 sectype: none 00:08:36.852 =====Discovery Log Entry 4====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: nvme subsystem 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4420 00:08:36.852 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: none 00:08:36.852 sectype: none 00:08:36.852 =====Discovery Log Entry 5====== 00:08:36.852 trtype: tcp 00:08:36.852 adrfam: ipv4 00:08:36.852 subtype: discovery subsystem referral 00:08:36.852 treq: not required 00:08:36.852 portid: 0 00:08:36.852 trsvcid: 4430 00:08:36.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:36.852 traddr: 10.0.0.2 00:08:36.852 eflags: none 00:08:36.852 sectype: none 00:08:36.852 Perform nvmf subsystem discovery via RPC 00:08:36.852 19:29:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:36.852 19:29:23 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:36.852 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.852 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.852 [2024-12-15 19:29:23.513357] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:36.852 [ 00:08:36.852 { 00:08:36.852 "allow_any_host": true, 00:08:36.852 "hosts": [], 00:08:36.852 "listen_addresses": [ 00:08:36.852 { 00:08:36.852 "adrfam": "IPv4", 00:08:36.852 "traddr": "10.0.0.2", 00:08:36.852 "transport": "TCP", 00:08:36.852 "trsvcid": "4420", 00:08:36.852 "trtype": "TCP" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:36.852 "subtype": "Discovery" 00:08:36.852 }, 00:08:36.852 { 00:08:36.852 "allow_any_host": true, 00:08:36.852 "hosts": [], 00:08:36.852 "listen_addresses": [ 00:08:36.852 { 00:08:36.852 "adrfam": "IPv4", 00:08:36.852 "traddr": "10.0.0.2", 00:08:36.852 "transport": "TCP", 00:08:36.852 "trsvcid": "4420", 00:08:36.852 "trtype": "TCP" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "max_cntlid": 65519, 00:08:36.852 "max_namespaces": 32, 00:08:36.852 "min_cntlid": 1, 00:08:36.852 "model_number": "SPDK bdev Controller", 00:08:36.852 "namespaces": [ 00:08:36.852 { 00:08:36.852 "bdev_name": "Null1", 00:08:36.852 "name": "Null1", 00:08:36.852 "nguid": "02A609BA04CE4253812A32A9AC494BB7", 00:08:36.852 "nsid": 1, 00:08:36.852 "uuid": "02a609ba-04ce-4253-812a-32a9ac494bb7" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.852 "serial_number": "SPDK00000000000001", 00:08:36.852 "subtype": "NVMe" 00:08:36.852 }, 00:08:36.852 { 00:08:36.852 "allow_any_host": true, 00:08:36.852 "hosts": [], 00:08:36.852 "listen_addresses": [ 00:08:36.852 { 00:08:36.852 "adrfam": "IPv4", 00:08:36.852 "traddr": "10.0.0.2", 00:08:36.852 "transport": "TCP", 00:08:36.852 "trsvcid": "4420", 00:08:36.852 "trtype": "TCP" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "max_cntlid": 65519, 00:08:36.852 "max_namespaces": 32, 00:08:36.852 "min_cntlid": 1, 
00:08:36.852 "model_number": "SPDK bdev Controller", 00:08:36.852 "namespaces": [ 00:08:36.852 { 00:08:36.852 "bdev_name": "Null2", 00:08:36.852 "name": "Null2", 00:08:36.852 "nguid": "B79E3452D07E436A8F3EDD23051FA637", 00:08:36.852 "nsid": 1, 00:08:36.852 "uuid": "b79e3452-d07e-436a-8f3e-dd23051fa637" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:36.852 "serial_number": "SPDK00000000000002", 00:08:36.852 "subtype": "NVMe" 00:08:36.852 }, 00:08:36.852 { 00:08:36.852 "allow_any_host": true, 00:08:36.852 "hosts": [], 00:08:36.852 "listen_addresses": [ 00:08:36.852 { 00:08:36.852 "adrfam": "IPv4", 00:08:36.852 "traddr": "10.0.0.2", 00:08:36.852 "transport": "TCP", 00:08:36.852 "trsvcid": "4420", 00:08:36.852 "trtype": "TCP" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "max_cntlid": 65519, 00:08:36.852 "max_namespaces": 32, 00:08:36.852 "min_cntlid": 1, 00:08:36.852 "model_number": "SPDK bdev Controller", 00:08:36.852 "namespaces": [ 00:08:36.852 { 00:08:36.852 "bdev_name": "Null3", 00:08:36.852 "name": "Null3", 00:08:36.852 "nguid": "6EFD3A7A81EB43179D2DC6187C0815E7", 00:08:36.852 "nsid": 1, 00:08:36.852 "uuid": "6efd3a7a-81eb-4317-9d2d-c6187c0815e7" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:36.852 "serial_number": "SPDK00000000000003", 00:08:36.852 "subtype": "NVMe" 00:08:36.852 }, 00:08:36.852 { 00:08:36.852 "allow_any_host": true, 00:08:36.852 "hosts": [], 00:08:36.852 "listen_addresses": [ 00:08:36.852 { 00:08:36.852 "adrfam": "IPv4", 00:08:36.852 "traddr": "10.0.0.2", 00:08:36.852 "transport": "TCP", 00:08:36.852 "trsvcid": "4420", 00:08:36.852 "trtype": "TCP" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "max_cntlid": 65519, 00:08:36.852 "max_namespaces": 32, 00:08:36.852 "min_cntlid": 1, 00:08:36.852 "model_number": "SPDK bdev Controller", 00:08:36.852 "namespaces": [ 00:08:36.852 { 00:08:36.852 "bdev_name": "Null4", 00:08:36.852 "name": "Null4", 00:08:36.852 "nguid": "CF3FCEB823094EE09672568778D5ABAD", 00:08:36.852 "nsid": 1, 00:08:36.852 "uuid": "cf3fceb8-2309-4ee0-9672-568778d5abad" 00:08:36.852 } 00:08:36.852 ], 00:08:36.852 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:36.852 "serial_number": "SPDK00000000000004", 00:08:36.852 "subtype": "NVMe" 00:08:36.852 } 00:08:36.852 ] 00:08:36.852 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.852 19:29:23 -- target/discovery.sh@42 -- # seq 1 4 00:08:36.852 19:29:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.852 19:29:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.852 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.852 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.852 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.852 19:29:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:36.852 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.852 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.853 19:29:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.853 19:29:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.853 19:29:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:36.853 19:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.853 19:29:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:36.853 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 19:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.853 19:29:23 -- target/discovery.sh@49 -- # check_bdevs= 00:08:36.853 19:29:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:36.853 19:29:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:36.853 19:29:23 -- target/discovery.sh@57 -- # nvmftestfini 00:08:36.853 19:29:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:36.853 19:29:23 -- nvmf/common.sh@116 -- # sync 00:08:36.853 19:29:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:36.853 19:29:23 -- nvmf/common.sh@119 -- # set +e 00:08:36.853 19:29:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:36.853 19:29:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:36.853 rmmod nvme_tcp 00:08:36.853 rmmod nvme_fabrics 00:08:36.853 rmmod nvme_keyring 00:08:36.853 19:29:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:36.853 19:29:23 -- nvmf/common.sh@123 -- # set -e 00:08:36.853 19:29:23 -- nvmf/common.sh@124 -- # return 0 00:08:36.853 19:29:23 -- nvmf/common.sh@477 -- # '[' -n 73203 ']' 00:08:36.853 19:29:23 -- nvmf/common.sh@478 -- # killprocess 73203 00:08:36.853 19:29:23 -- common/autotest_common.sh@936 -- # '[' -z 73203 ']' 00:08:36.853 19:29:23 -- 
common/autotest_common.sh@940 -- # kill -0 73203 00:08:36.853 19:29:23 -- common/autotest_common.sh@941 -- # uname 00:08:36.853 19:29:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.853 19:29:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73203 00:08:37.111 killing process with pid 73203 00:08:37.111 19:29:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.111 19:29:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.111 19:29:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73203' 00:08:37.111 19:29:23 -- common/autotest_common.sh@955 -- # kill 73203 00:08:37.111 [2024-12-15 19:29:23.775742] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:37.111 19:29:23 -- common/autotest_common.sh@960 -- # wait 73203 00:08:37.370 19:29:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:37.370 19:29:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:37.370 19:29:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:37.370 19:29:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.370 19:29:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:37.370 19:29:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.370 19:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.370 19:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.370 19:29:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:37.370 00:08:37.370 real 0m2.588s 00:08:37.370 user 0m7.030s 00:08:37.370 sys 0m0.684s 00:08:37.370 19:29:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.370 19:29:24 -- common/autotest_common.sh@10 -- # set +x 00:08:37.370 ************************************ 00:08:37.370 END TEST nvmf_discovery 00:08:37.370 ************************************ 00:08:37.370 19:29:24 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.370 19:29:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.370 19:29:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.370 19:29:24 -- common/autotest_common.sh@10 -- # set +x 00:08:37.370 ************************************ 00:08:37.370 START TEST nvmf_referrals 00:08:37.370 ************************************ 00:08:37.370 19:29:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.370 * Looking for test storage... 
00:08:37.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.370 19:29:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.370 19:29:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.370 19:29:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.629 19:29:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.629 19:29:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.629 19:29:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.629 19:29:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.629 19:29:24 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.629 19:29:24 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.629 19:29:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.629 19:29:24 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.630 19:29:24 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.630 19:29:24 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.630 19:29:24 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.630 19:29:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.630 19:29:24 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.630 19:29:24 -- scripts/common.sh@344 -- # : 1 00:08:37.630 19:29:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.630 19:29:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.630 19:29:24 -- scripts/common.sh@364 -- # decimal 1 00:08:37.630 19:29:24 -- scripts/common.sh@352 -- # local d=1 00:08:37.630 19:29:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.630 19:29:24 -- scripts/common.sh@354 -- # echo 1 00:08:37.630 19:29:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.630 19:29:24 -- scripts/common.sh@365 -- # decimal 2 00:08:37.630 19:29:24 -- scripts/common.sh@352 -- # local d=2 00:08:37.630 19:29:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.630 19:29:24 -- scripts/common.sh@354 -- # echo 2 00:08:37.630 19:29:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.630 19:29:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.630 19:29:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.630 19:29:24 -- scripts/common.sh@367 -- # return 0 00:08:37.630 19:29:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.630 19:29:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.630 --rc genhtml_branch_coverage=1 00:08:37.630 --rc genhtml_function_coverage=1 00:08:37.630 --rc genhtml_legend=1 00:08:37.630 --rc geninfo_all_blocks=1 00:08:37.630 --rc geninfo_unexecuted_blocks=1 00:08:37.630 00:08:37.630 ' 00:08:37.630 19:29:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.630 --rc genhtml_branch_coverage=1 00:08:37.630 --rc genhtml_function_coverage=1 00:08:37.630 --rc genhtml_legend=1 00:08:37.630 --rc geninfo_all_blocks=1 00:08:37.630 --rc geninfo_unexecuted_blocks=1 00:08:37.630 00:08:37.630 ' 00:08:37.630 19:29:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.630 --rc genhtml_branch_coverage=1 00:08:37.630 --rc genhtml_function_coverage=1 00:08:37.630 --rc genhtml_legend=1 00:08:37.630 --rc geninfo_all_blocks=1 00:08:37.630 --rc geninfo_unexecuted_blocks=1 00:08:37.630 00:08:37.630 ' 00:08:37.630 
19:29:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.630 --rc genhtml_branch_coverage=1 00:08:37.630 --rc genhtml_function_coverage=1 00:08:37.630 --rc genhtml_legend=1 00:08:37.630 --rc geninfo_all_blocks=1 00:08:37.630 --rc geninfo_unexecuted_blocks=1 00:08:37.630 00:08:37.630 ' 00:08:37.630 19:29:24 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.630 19:29:24 -- nvmf/common.sh@7 -- # uname -s 00:08:37.630 19:29:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.630 19:29:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.630 19:29:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.630 19:29:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.630 19:29:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.630 19:29:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.630 19:29:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.630 19:29:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.630 19:29:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.630 19:29:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:37.630 19:29:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:37.630 19:29:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.630 19:29:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.630 19:29:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.630 19:29:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.630 19:29:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.630 19:29:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.630 19:29:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.630 19:29:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.630 19:29:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.630 19:29:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.630 19:29:24 -- paths/export.sh@5 -- # export PATH 00:08:37.630 19:29:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.630 19:29:24 -- nvmf/common.sh@46 -- # : 0 00:08:37.630 19:29:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.630 19:29:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.630 19:29:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.630 19:29:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.630 19:29:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.630 19:29:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.630 19:29:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.630 19:29:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.630 19:29:24 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:37.630 19:29:24 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:37.630 19:29:24 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:37.630 19:29:24 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:37.630 19:29:24 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:37.630 19:29:24 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:37.630 19:29:24 -- target/referrals.sh@37 -- # nvmftestinit 00:08:37.630 19:29:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:37.630 19:29:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.630 19:29:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.630 19:29:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.630 19:29:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.630 19:29:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.630 19:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.630 19:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.630 19:29:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:37.630 19:29:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:37.630 19:29:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.630 19:29:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
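Every `nvme discover` call in the remainder of this run identifies itself with the host NQN generated by `nvme gen-hostnqn` earlier in this preamble. A minimal sketch of how those values feed the CLI; the parameter expansion used here to derive NVME_HOSTID is an assumption, since the trace only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # assumed: keep only the uuid portion
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Used below against the discovery service the referrals test listens on:
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json

The trace now continues with nvmf_veth_init rebuilding the virtual test network.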
00:08:37.630 19:29:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:37.630 19:29:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:37.630 19:29:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.630 19:29:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.630 19:29:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.630 19:29:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.630 19:29:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.630 19:29:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.630 19:29:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.630 19:29:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.630 19:29:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:37.630 19:29:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:37.630 Cannot find device "nvmf_tgt_br" 00:08:37.630 19:29:24 -- nvmf/common.sh@154 -- # true 00:08:37.630 19:29:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.630 Cannot find device "nvmf_tgt_br2" 00:08:37.630 19:29:24 -- nvmf/common.sh@155 -- # true 00:08:37.630 19:29:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:37.630 19:29:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:37.630 Cannot find device "nvmf_tgt_br" 00:08:37.630 19:29:24 -- nvmf/common.sh@157 -- # true 00:08:37.630 19:29:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:37.630 Cannot find device "nvmf_tgt_br2" 00:08:37.630 19:29:24 -- nvmf/common.sh@158 -- # true 00:08:37.630 19:29:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:37.630 19:29:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:37.630 19:29:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.630 19:29:24 -- nvmf/common.sh@161 -- # true 00:08:37.631 19:29:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.631 19:29:24 -- nvmf/common.sh@162 -- # true 00:08:37.631 19:29:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.631 19:29:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.890 19:29:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.890 19:29:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.890 19:29:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.890 19:29:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.890 19:29:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.890 19:29:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.890 19:29:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.890 19:29:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:37.890 19:29:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:37.890 19:29:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
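The nvmf_veth_init steps above, together with the bridge and firewall steps that follow below, rebuild the same virtual topology the discovery test used: a network namespace holding the target-side ends of three veth pairs, all bridged back to the initiator side of the host. A condensed sketch of what the trace assembles:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Traced just below: bring the links up, enslave the *_br ends to one bridge,
    # and open the NVMe/TCP port on the initiator interface.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the topology before the target is started.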
00:08:37.890 19:29:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:37.890 19:29:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.890 19:29:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.890 19:29:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.890 19:29:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:37.890 19:29:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:37.890 19:29:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.890 19:29:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.890 19:29:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.890 19:29:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.890 19:29:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.890 19:29:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:37.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:37.890 00:08:37.890 --- 10.0.0.2 ping statistics --- 00:08:37.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.890 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:37.890 19:29:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:37.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:37.890 00:08:37.890 --- 10.0.0.3 ping statistics --- 00:08:37.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.890 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:37.890 19:29:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:37.890 00:08:37.890 --- 10.0.0.1 ping statistics --- 00:08:37.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.890 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:37.890 19:29:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.890 19:29:24 -- nvmf/common.sh@421 -- # return 0 00:08:37.890 19:29:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.890 19:29:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.890 19:29:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:37.890 19:29:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:37.890 19:29:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.890 19:29:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:37.890 19:29:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:37.890 19:29:24 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:37.890 19:29:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.890 19:29:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.890 19:29:24 -- common/autotest_common.sh@10 -- # set +x 00:08:37.890 19:29:24 -- nvmf/common.sh@469 -- # nvmfpid=73437 00:08:37.890 19:29:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.890 19:29:24 -- nvmf/common.sh@470 -- # waitforlisten 73437 00:08:37.890 19:29:24 -- common/autotest_common.sh@829 -- # '[' -z 73437 ']' 00:08:37.890 19:29:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.890 19:29:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.890 19:29:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.890 19:29:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.890 19:29:24 -- common/autotest_common.sh@10 -- # set +x 00:08:38.148 [2024-12-15 19:29:24.803756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.148 [2024-12-15 19:29:24.803876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.148 [2024-12-15 19:29:24.942570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.148 [2024-12-15 19:29:25.027038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.148 [2024-12-15 19:29:25.027217] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.149 [2024-12-15 19:29:25.027230] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.149 [2024-12-15 19:29:25.027238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
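At this point nvmf_tgt has been launched inside the namespace and the harness waits for its RPC socket before issuing any rpc_cmd calls. A minimal stand-in for that launch-and-wait step; waitforlisten's real implementation is not expanded in this trace, so the polling loop below is an assumption:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app exposes the default RPC socket before sending RPCs.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The EAL initialization and the four "Reactor started" notices above are the target's own startup log; once the socket is up, the referral configuration below proceeds over RPC.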
00:08:38.149 [2024-12-15 19:29:25.027698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.149 [2024-12-15 19:29:25.027882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.149 [2024-12-15 19:29:25.028309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.149 [2024-12-15 19:29:25.028371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.084 19:29:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.084 19:29:25 -- common/autotest_common.sh@862 -- # return 0 00:08:39.084 19:29:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.084 19:29:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 19:29:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.084 19:29:25 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 [2024-12-15 19:29:25.903785] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.084 19:29:25 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 [2024-12-15 19:29:25.924157] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.084 19:29:25 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.084 19:29:25 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.084 19:29:25 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.084 19:29:25 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.084 19:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.084 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 19:29:25 -- target/referrals.sh@48 -- # jq length 00:08:39.084 19:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:39.343 19:29:26 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:39.343 19:29:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.343 19:29:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.343 19:29:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
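The referral checks above and below always compare two views of the same data: what the target reports over RPC and what an initiator reads from the discovery log. A condensed sketch of that round-trip, with the jq filters copied from the trace and $NVME_HOST as derived in the preamble:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed expansion of rpc_cmd

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # View 1: the target's own referral list.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # View 2: the discovery log seen by an initiator on 10.0.0.2:8009; the filter
    # drops the "current discovery subsystem" record itself.
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Both views must report the same three addresses: 127.0.0.2 127.0.0.3 127.0.0.4

The rest of the trace removes the three referrals, checks that both views go empty, and then re-adds 127.0.0.2:4430 twice, once for the discovery NQN and once for nqn.2016-06.io.spdk:cnode1, to verify subsystem-scoped referrals the same way.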
00:08:39.343 19:29:26 -- target/referrals.sh@21 -- # sort 00:08:39.343 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.343 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.343 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.343 19:29:26 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:39.343 19:29:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.343 19:29:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.343 19:29:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.343 19:29:26 -- target/referrals.sh@26 -- # sort 00:08:39.343 19:29:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.343 19:29:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.343 19:29:26 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:39.343 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.343 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.343 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:39.343 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.343 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.343 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:39.343 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.343 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.343 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.343 19:29:26 -- target/referrals.sh@56 -- # jq length 00:08:39.343 19:29:26 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.343 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.343 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.343 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.602 19:29:26 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:39.602 19:29:26 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:39.602 19:29:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.602 19:29:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.602 19:29:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.602 19:29:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.602 19:29:26 -- target/referrals.sh@26 -- # sort 00:08:39.602 19:29:26 -- target/referrals.sh@26 -- # echo 00:08:39.602 19:29:26 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:39.602 19:29:26 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:39.602 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.602 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.602 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.602 19:29:26 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:39.602 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.602 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.602 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.602 19:29:26 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:39.602 19:29:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.602 19:29:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.602 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.602 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.602 19:29:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.602 19:29:26 -- target/referrals.sh@21 -- # sort 00:08:39.602 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.860 19:29:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:39.860 19:29:26 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.860 19:29:26 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:39.860 19:29:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.860 19:29:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.860 19:29:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.860 19:29:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.860 19:29:26 -- target/referrals.sh@26 -- # sort 00:08:39.860 19:29:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:39.860 19:29:26 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.860 19:29:26 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:39.860 19:29:26 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:39.860 19:29:26 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:39.860 19:29:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:39.860 19:29:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.860 19:29:26 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:39.860 19:29:26 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:39.860 19:29:26 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:39.860 19:29:26 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:39.860 19:29:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
--hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.860 19:29:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.119 19:29:26 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.119 19:29:26 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.119 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.119 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:40.119 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.119 19:29:26 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.119 19:29:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.119 19:29:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.119 19:29:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.119 19:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.119 19:29:26 -- target/referrals.sh@21 -- # sort 00:08:40.119 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:08:40.119 19:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.119 19:29:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.119 19:29:26 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.119 19:29:26 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.119 19:29:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.119 19:29:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.119 19:29:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.119 19:29:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.119 19:29:26 -- target/referrals.sh@26 -- # sort 00:08:40.378 19:29:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:40.378 19:29:27 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.378 19:29:27 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:40.378 19:29:27 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:40.378 19:29:27 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.378 19:29:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.378 19:29:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.378 19:29:27 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:40.378 19:29:27 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.378 19:29:27 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:40.378 19:29:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.378 19:29:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.378 19:29:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
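Each check above has the same shape: read the referral list back over RPC, then confirm the host sees matching records in the discovery log page. The host-side half, condensed from the get_referral_ips/get_discovery_entries helpers traced above (NVME_HOSTNQN and NVME_HOSTID are the generated values from nvmf/common.sh):

  DISCOVER="nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json"
  # Referral entries are every record that is not the discovery subsystem being queried:
  $DISCOVER | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # A referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record instead:
  $DISCOVER | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'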
00:08:40.378 19:29:27 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.378 19:29:27 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:40.378 19:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.378 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:40.378 19:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.378 19:29:27 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.378 19:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.378 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:40.378 19:29:27 -- target/referrals.sh@82 -- # jq length 00:08:40.378 19:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.636 19:29:27 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:40.636 19:29:27 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:40.636 19:29:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.636 19:29:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.636 19:29:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.636 19:29:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.636 19:29:27 -- target/referrals.sh@26 -- # sort 00:08:40.637 19:29:27 -- target/referrals.sh@26 -- # echo 00:08:40.637 19:29:27 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:40.637 19:29:27 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:40.637 19:29:27 -- target/referrals.sh@86 -- # nvmftestfini 00:08:40.637 19:29:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.637 19:29:27 -- nvmf/common.sh@116 -- # sync 00:08:40.895 19:29:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:40.895 19:29:27 -- nvmf/common.sh@119 -- # set +e 00:08:40.895 19:29:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.895 19:29:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:40.895 rmmod nvme_tcp 00:08:40.895 rmmod nvme_fabrics 00:08:40.895 rmmod nvme_keyring 00:08:40.895 19:29:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.895 19:29:27 -- nvmf/common.sh@123 -- # set -e 00:08:40.895 19:29:27 -- nvmf/common.sh@124 -- # return 0 00:08:40.895 19:29:27 -- nvmf/common.sh@477 -- # '[' -n 73437 ']' 00:08:40.895 19:29:27 -- nvmf/common.sh@478 -- # killprocess 73437 00:08:40.895 19:29:27 -- common/autotest_common.sh@936 -- # '[' -z 73437 ']' 00:08:40.895 19:29:27 -- common/autotest_common.sh@940 -- # kill -0 73437 00:08:40.895 19:29:27 -- common/autotest_common.sh@941 -- # uname 00:08:40.895 19:29:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.895 19:29:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73437 00:08:40.895 19:29:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.895 19:29:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.895 19:29:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73437' 00:08:40.895 killing process with pid 73437 00:08:40.895 19:29:27 -- common/autotest_common.sh@955 -- # kill 73437 00:08:40.895 19:29:27 -- common/autotest_common.sh@960 -- # wait 73437 00:08:41.153 19:29:27 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:41.153 19:29:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:41.153 19:29:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:41.153 19:29:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.153 19:29:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:41.153 19:29:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.153 19:29:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.153 19:29:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.153 19:29:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:41.153 00:08:41.153 real 0m3.781s 00:08:41.153 user 0m12.412s 00:08:41.153 sys 0m0.976s 00:08:41.153 19:29:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.153 ************************************ 00:08:41.153 END TEST nvmf_referrals 00:08:41.153 ************************************ 00:08:41.153 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:41.153 19:29:27 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.153 19:29:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.153 19:29:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.153 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:41.153 ************************************ 00:08:41.153 START TEST nvmf_connect_disconnect 00:08:41.153 ************************************ 00:08:41.153 19:29:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.413 * Looking for test storage... 00:08:41.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.413 19:29:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.413 19:29:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.413 19:29:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.413 19:29:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.413 19:29:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.413 19:29:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.413 19:29:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:41.413 19:29:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:41.413 19:29:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:41.413 19:29:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.413 19:29:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:41.413 19:29:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:41.413 19:29:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:41.413 19:29:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:41.413 19:29:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:41.413 19:29:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:41.413 19:29:28 -- scripts/common.sh@344 -- # : 1 00:08:41.413 19:29:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:41.413 19:29:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.413 19:29:28 -- scripts/common.sh@364 -- # decimal 1 00:08:41.413 19:29:28 -- scripts/common.sh@352 -- # local d=1 00:08:41.413 19:29:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.413 19:29:28 -- scripts/common.sh@354 -- # echo 1 00:08:41.413 19:29:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:41.413 19:29:28 -- scripts/common.sh@365 -- # decimal 2 00:08:41.413 19:29:28 -- scripts/common.sh@352 -- # local d=2 00:08:41.413 19:29:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.413 19:29:28 -- scripts/common.sh@354 -- # echo 2 00:08:41.413 19:29:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:41.413 19:29:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:41.413 19:29:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:41.413 19:29:28 -- scripts/common.sh@367 -- # return 0 00:08:41.413 19:29:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.413 19:29:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.413 --rc genhtml_branch_coverage=1 00:08:41.413 --rc genhtml_function_coverage=1 00:08:41.413 --rc genhtml_legend=1 00:08:41.413 --rc geninfo_all_blocks=1 00:08:41.413 --rc geninfo_unexecuted_blocks=1 00:08:41.413 00:08:41.413 ' 00:08:41.413 19:29:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.413 --rc genhtml_branch_coverage=1 00:08:41.413 --rc genhtml_function_coverage=1 00:08:41.413 --rc genhtml_legend=1 00:08:41.413 --rc geninfo_all_blocks=1 00:08:41.413 --rc geninfo_unexecuted_blocks=1 00:08:41.413 00:08:41.413 ' 00:08:41.413 19:29:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.413 --rc genhtml_branch_coverage=1 00:08:41.413 --rc genhtml_function_coverage=1 00:08:41.413 --rc genhtml_legend=1 00:08:41.413 --rc geninfo_all_blocks=1 00:08:41.413 --rc geninfo_unexecuted_blocks=1 00:08:41.413 00:08:41.413 ' 00:08:41.413 19:29:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.413 --rc genhtml_branch_coverage=1 00:08:41.413 --rc genhtml_function_coverage=1 00:08:41.413 --rc genhtml_legend=1 00:08:41.413 --rc geninfo_all_blocks=1 00:08:41.413 --rc geninfo_unexecuted_blocks=1 00:08:41.413 00:08:41.413 ' 00:08:41.413 19:29:28 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.413 19:29:28 -- nvmf/common.sh@7 -- # uname -s 00:08:41.413 19:29:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.413 19:29:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.413 19:29:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.413 19:29:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.413 19:29:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.413 19:29:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.413 19:29:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.413 19:29:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.413 19:29:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.413 19:29:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:08:41.413 19:29:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:08:41.413 19:29:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.413 19:29:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.413 19:29:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.413 19:29:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.413 19:29:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.413 19:29:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.413 19:29:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.413 19:29:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.413 19:29:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.413 19:29:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.413 19:29:28 -- paths/export.sh@5 -- # export PATH 00:08:41.413 19:29:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.413 19:29:28 -- nvmf/common.sh@46 -- # : 0 00:08:41.413 19:29:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.413 19:29:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.413 19:29:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.413 19:29:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.413 19:29:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.413 19:29:28 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:41.413 19:29:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.413 19:29:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.413 19:29:28 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.413 19:29:28 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.413 19:29:28 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:41.413 19:29:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:41.413 19:29:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.413 19:29:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.413 19:29:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.413 19:29:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.413 19:29:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.413 19:29:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.413 19:29:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.413 19:29:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:41.413 19:29:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:41.413 19:29:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.413 19:29:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.413 19:29:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.413 19:29:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:41.413 19:29:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.413 19:29:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.413 19:29:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.413 19:29:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.414 19:29:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.414 19:29:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.414 19:29:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.414 19:29:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.414 19:29:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:41.414 19:29:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:41.414 Cannot find device "nvmf_tgt_br" 00:08:41.414 19:29:28 -- nvmf/common.sh@154 -- # true 00:08:41.414 19:29:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.414 Cannot find device "nvmf_tgt_br2" 00:08:41.414 19:29:28 -- nvmf/common.sh@155 -- # true 00:08:41.414 19:29:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:41.414 19:29:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:41.414 Cannot find device "nvmf_tgt_br" 00:08:41.414 19:29:28 -- nvmf/common.sh@157 -- # true 00:08:41.414 19:29:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:41.414 Cannot find device "nvmf_tgt_br2" 00:08:41.414 19:29:28 -- nvmf/common.sh@158 -- # true 00:08:41.414 19:29:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:41.672 19:29:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:41.672 19:29:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:41.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.672 19:29:28 -- nvmf/common.sh@161 -- # true 00:08:41.672 19:29:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.672 19:29:28 -- nvmf/common.sh@162 -- # true 00:08:41.672 19:29:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.672 19:29:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.672 19:29:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.672 19:29:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.672 19:29:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.672 19:29:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.672 19:29:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.672 19:29:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.672 19:29:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.672 19:29:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:41.672 19:29:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:41.672 19:29:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:41.672 19:29:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:41.672 19:29:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.672 19:29:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.672 19:29:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.672 19:29:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:41.672 19:29:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:41.672 19:29:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.672 19:29:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.672 19:29:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.672 19:29:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.673 19:29:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.673 19:29:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:41.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:41.673 00:08:41.673 --- 10.0.0.2 ping statistics --- 00:08:41.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.673 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:41.673 19:29:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:41.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:41.673 00:08:41.673 --- 10.0.0.3 ping statistics --- 00:08:41.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.673 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:41.673 19:29:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:41.673 00:08:41.673 --- 10.0.0.1 ping statistics --- 00:08:41.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.673 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:41.673 19:29:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.673 19:29:28 -- nvmf/common.sh@421 -- # return 0 00:08:41.673 19:29:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:41.673 19:29:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.673 19:29:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:41.673 19:29:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:41.673 19:29:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.673 19:29:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:41.673 19:29:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:41.673 19:29:28 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:41.673 19:29:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:41.673 19:29:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.673 19:29:28 -- common/autotest_common.sh@10 -- # set +x 00:08:41.673 19:29:28 -- nvmf/common.sh@469 -- # nvmfpid=73755 00:08:41.673 19:29:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.673 19:29:28 -- nvmf/common.sh@470 -- # waitforlisten 73755 00:08:41.673 19:29:28 -- common/autotest_common.sh@829 -- # '[' -z 73755 ']' 00:08:41.673 19:29:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.673 19:29:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.673 19:29:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.673 19:29:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.673 19:29:28 -- common/autotest_common.sh@10 -- # set +x 00:08:41.931 [2024-12-15 19:29:28.586581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:41.931 [2024-12-15 19:29:28.586691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.931 [2024-12-15 19:29:28.723087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.931 [2024-12-15 19:29:28.815245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.931 [2024-12-15 19:29:28.815441] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.931 [2024-12-15 19:29:28.815454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.931 [2024-12-15 19:29:28.815463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
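The connect_disconnect test repeats the same fixture bring-up seen earlier: the "Cannot find device" and "Cannot open network namespace" messages are only nvmf_veth_init clearing state that does not exist yet, after which it builds the two-namespace topology the freshly started target relies on: initiator on 10.0.0.1 in the root namespace, target addresses 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, joined by the nvmf_br bridge. Stripped of the cleanup and ping checks, the plumbing reduces to roughly this sketch:

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per endpoint; the *_br ends stay in the root namespace so they can join the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT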
00:08:41.931 [2024-12-15 19:29:28.815606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.931 [2024-12-15 19:29:28.815776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.931 [2024-12-15 19:29:28.816509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.931 [2024-12-15 19:29:28.816550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.869 19:29:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.869 19:29:29 -- common/autotest_common.sh@862 -- # return 0 00:08:42.869 19:29:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:42.869 19:29:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.869 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.869 19:29:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.869 19:29:29 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:42.869 19:29:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.869 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.869 [2024-12-15 19:29:29.625284] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.869 19:29:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.869 19:29:29 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:42.869 19:29:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.869 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.869 19:29:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.869 19:29:29 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:42.869 19:29:29 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.869 19:29:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.869 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.869 19:29:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.869 19:29:29 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.870 19:29:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.870 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 19:29:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.870 19:29:29 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.870 19:29:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.870 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 [2024-12-15 19:29:29.701036] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.870 19:29:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.870 19:29:29 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:42.870 19:29:29 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:42.870 19:29:29 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:42.870 19:29:29 -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:54.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.127 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:44.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.881 19:33:13 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
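The long run of "disconnected 1 controller(s)" messages above is the visible half of the loop that connect_disconnect.sh runs with xtrace switched off (set +x at @34): one hundred rounds (num_iterations=100) of attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and tearing the controller back down. A minimal sketch of one round under the options the trace sets up; the real script interleaves additional checks and waits between the two steps, omitted here:

  for i in $(seq 1 100); do
      # Connect with 8 I/O queues (NVME_CONNECT='nvme connect -i 8'), using the generated host identity.
      nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # Disconnect prints the "NQN:... disconnected 1 controller(s)" lines logged above.
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done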
00:12:26.881 19:33:13 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.881 19:33:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:26.881 19:33:13 -- nvmf/common.sh@116 -- # sync 00:12:26.881 19:33:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:26.881 19:33:13 -- nvmf/common.sh@119 -- # set +e 00:12:26.881 19:33:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:26.881 19:33:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:26.881 rmmod nvme_tcp 00:12:26.881 rmmod nvme_fabrics 00:12:26.881 rmmod nvme_keyring 00:12:26.881 19:33:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:26.881 19:33:13 -- nvmf/common.sh@123 -- # set -e 00:12:26.881 19:33:13 -- nvmf/common.sh@124 -- # return 0 00:12:26.881 19:33:13 -- nvmf/common.sh@477 -- # '[' -n 73755 ']' 00:12:26.881 19:33:13 -- nvmf/common.sh@478 -- # killprocess 73755 00:12:26.881 19:33:13 -- common/autotest_common.sh@936 -- # '[' -z 73755 ']' 00:12:26.881 19:33:13 -- common/autotest_common.sh@940 -- # kill -0 73755 00:12:26.881 19:33:13 -- common/autotest_common.sh@941 -- # uname 00:12:26.881 19:33:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.881 19:33:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73755 00:12:26.881 19:33:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.881 19:33:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.881 killing process with pid 73755 00:12:26.881 19:33:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73755' 00:12:26.881 19:33:13 -- common/autotest_common.sh@955 -- # kill 73755 00:12:26.881 19:33:13 -- common/autotest_common.sh@960 -- # wait 73755 00:12:27.140 19:33:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.140 19:33:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.140 19:33:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.140 19:33:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.140 19:33:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.140 19:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.140 19:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.140 19:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.398 19:33:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:27.398 00:12:27.398 real 3m46.072s 00:12:27.398 user 14m38.071s 00:12:27.398 sys 0m25.228s 00:12:27.398 19:33:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:27.398 ************************************ 00:12:27.398 END TEST nvmf_connect_disconnect 00:12:27.398 19:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.398 ************************************ 00:12:27.398 19:33:14 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:27.398 19:33:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:27.398 19:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.398 19:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.398 ************************************ 00:12:27.398 START TEST nvmf_multitarget 00:12:27.398 ************************************ 00:12:27.398 19:33:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:27.398 * Looking for test storage... 
00:12:27.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.398 19:33:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:27.398 19:33:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:27.398 19:33:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:27.398 19:33:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:27.398 19:33:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:27.398 19:33:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:27.398 19:33:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:27.398 19:33:14 -- scripts/common.sh@335 -- # IFS=.-: 00:12:27.398 19:33:14 -- scripts/common.sh@335 -- # read -ra ver1 00:12:27.398 19:33:14 -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.398 19:33:14 -- scripts/common.sh@336 -- # read -ra ver2 00:12:27.398 19:33:14 -- scripts/common.sh@337 -- # local 'op=<' 00:12:27.398 19:33:14 -- scripts/common.sh@339 -- # ver1_l=2 00:12:27.398 19:33:14 -- scripts/common.sh@340 -- # ver2_l=1 00:12:27.398 19:33:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:27.398 19:33:14 -- scripts/common.sh@343 -- # case "$op" in 00:12:27.398 19:33:14 -- scripts/common.sh@344 -- # : 1 00:12:27.398 19:33:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:27.398 19:33:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.399 19:33:14 -- scripts/common.sh@364 -- # decimal 1 00:12:27.399 19:33:14 -- scripts/common.sh@352 -- # local d=1 00:12:27.399 19:33:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.399 19:33:14 -- scripts/common.sh@354 -- # echo 1 00:12:27.399 19:33:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:27.399 19:33:14 -- scripts/common.sh@365 -- # decimal 2 00:12:27.399 19:33:14 -- scripts/common.sh@352 -- # local d=2 00:12:27.399 19:33:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.399 19:33:14 -- scripts/common.sh@354 -- # echo 2 00:12:27.399 19:33:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:27.399 19:33:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:27.399 19:33:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:27.399 19:33:14 -- scripts/common.sh@367 -- # return 0 00:12:27.399 19:33:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.399 19:33:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:27.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.399 --rc genhtml_branch_coverage=1 00:12:27.399 --rc genhtml_function_coverage=1 00:12:27.399 --rc genhtml_legend=1 00:12:27.399 --rc geninfo_all_blocks=1 00:12:27.399 --rc geninfo_unexecuted_blocks=1 00:12:27.399 00:12:27.399 ' 00:12:27.399 19:33:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:27.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.399 --rc genhtml_branch_coverage=1 00:12:27.399 --rc genhtml_function_coverage=1 00:12:27.399 --rc genhtml_legend=1 00:12:27.399 --rc geninfo_all_blocks=1 00:12:27.399 --rc geninfo_unexecuted_blocks=1 00:12:27.399 00:12:27.399 ' 00:12:27.399 19:33:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:27.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.399 --rc genhtml_branch_coverage=1 00:12:27.399 --rc genhtml_function_coverage=1 00:12:27.399 --rc genhtml_legend=1 00:12:27.399 --rc geninfo_all_blocks=1 00:12:27.399 --rc geninfo_unexecuted_blocks=1 00:12:27.399 00:12:27.399 ' 00:12:27.399 
19:33:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:27.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.399 --rc genhtml_branch_coverage=1 00:12:27.399 --rc genhtml_function_coverage=1 00:12:27.399 --rc genhtml_legend=1 00:12:27.399 --rc geninfo_all_blocks=1 00:12:27.399 --rc geninfo_unexecuted_blocks=1 00:12:27.399 00:12:27.399 ' 00:12:27.399 19:33:14 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.399 19:33:14 -- nvmf/common.sh@7 -- # uname -s 00:12:27.399 19:33:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.399 19:33:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.399 19:33:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.399 19:33:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.399 19:33:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.399 19:33:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.399 19:33:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.399 19:33:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.399 19:33:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.399 19:33:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.399 19:33:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:27.399 19:33:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:27.399 19:33:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.399 19:33:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.399 19:33:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.399 19:33:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.399 19:33:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.399 19:33:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.399 19:33:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.399 19:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.399 19:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.399 19:33:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.399 19:33:14 -- paths/export.sh@5 -- # export PATH 00:12:27.399 19:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.658 19:33:14 -- nvmf/common.sh@46 -- # : 0 00:12:27.658 19:33:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:27.658 19:33:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:27.658 19:33:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:27.658 19:33:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.658 19:33:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.658 19:33:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:27.658 19:33:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:27.658 19:33:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:27.658 19:33:14 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:27.658 19:33:14 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:27.658 19:33:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:27.658 19:33:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.658 19:33:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:27.658 19:33:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:27.658 19:33:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:27.658 19:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.658 19:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.658 19:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.658 19:33:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:27.658 19:33:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:27.658 19:33:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:27.658 19:33:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:27.658 19:33:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:27.658 19:33:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:27.658 19:33:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.658 19:33:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.658 19:33:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:27.658 19:33:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:27.658 19:33:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.658 19:33:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.658 19:33:14 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.658 19:33:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.658 19:33:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.658 19:33:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.658 19:33:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.658 19:33:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.658 19:33:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:27.658 19:33:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:27.658 Cannot find device "nvmf_tgt_br" 00:12:27.658 19:33:14 -- nvmf/common.sh@154 -- # true 00:12:27.658 19:33:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.658 Cannot find device "nvmf_tgt_br2" 00:12:27.658 19:33:14 -- nvmf/common.sh@155 -- # true 00:12:27.659 19:33:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:27.659 19:33:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:27.659 Cannot find device "nvmf_tgt_br" 00:12:27.659 19:33:14 -- nvmf/common.sh@157 -- # true 00:12:27.659 19:33:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:27.659 Cannot find device "nvmf_tgt_br2" 00:12:27.659 19:33:14 -- nvmf/common.sh@158 -- # true 00:12:27.659 19:33:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:27.659 19:33:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:27.659 19:33:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.659 19:33:14 -- nvmf/common.sh@161 -- # true 00:12:27.659 19:33:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.659 19:33:14 -- nvmf/common.sh@162 -- # true 00:12:27.659 19:33:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.659 19:33:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.659 19:33:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.659 19:33:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.659 19:33:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.659 19:33:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.659 19:33:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.659 19:33:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.659 19:33:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.659 19:33:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:27.659 19:33:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:27.659 19:33:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:27.659 19:33:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:27.659 19:33:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.659 19:33:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.659 19:33:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:27.918 19:33:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:27.918 19:33:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:27.918 19:33:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.918 19:33:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.918 19:33:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.918 19:33:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.918 19:33:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.918 19:33:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:27.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:12:27.918 00:12:27.918 --- 10.0.0.2 ping statistics --- 00:12:27.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.918 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:27.918 19:33:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:27.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:12:27.918 00:12:27.918 --- 10.0.0.3 ping statistics --- 00:12:27.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.918 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:27.918 19:33:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:27.918 00:12:27.918 --- 10.0.0.1 ping statistics --- 00:12:27.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.918 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:27.918 19:33:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.918 19:33:14 -- nvmf/common.sh@421 -- # return 0 00:12:27.918 19:33:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:27.918 19:33:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.918 19:33:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:27.918 19:33:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:27.918 19:33:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.918 19:33:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:27.918 19:33:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:27.918 19:33:14 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:27.918 19:33:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:27.918 19:33:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.918 19:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.918 19:33:14 -- nvmf/common.sh@469 -- # nvmfpid=77548 00:12:27.918 19:33:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.918 19:33:14 -- nvmf/common.sh@470 -- # waitforlisten 77548 00:12:27.918 19:33:14 -- common/autotest_common.sh@829 -- # '[' -z 77548 ']' 00:12:27.918 19:33:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.918 19:33:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:27.918 19:33:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.918 19:33:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.918 19:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.918 [2024-12-15 19:33:14.688537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:27.918 [2024-12-15 19:33:14.688611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.177 [2024-12-15 19:33:14.820720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.177 [2024-12-15 19:33:14.898880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.177 [2024-12-15 19:33:14.899034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.177 [2024-12-15 19:33:14.899048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.177 [2024-12-15 19:33:14.899056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.177 [2024-12-15 19:33:14.899236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.177 [2024-12-15 19:33:14.899348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.177 [2024-12-15 19:33:14.900263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.177 [2024-12-15 19:33:14.900310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.743 19:33:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.743 19:33:15 -- common/autotest_common.sh@862 -- # return 0 00:12:28.743 19:33:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.743 19:33:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.743 19:33:15 -- common/autotest_common.sh@10 -- # set +x 00:12:29.001 19:33:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.001 19:33:15 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.001 19:33:15 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.001 19:33:15 -- target/multitarget.sh@21 -- # jq length 00:12:29.001 19:33:15 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.001 19:33:15 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.260 "nvmf_tgt_1" 00:12:29.260 19:33:15 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:29.260 "nvmf_tgt_2" 00:12:29.260 19:33:16 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.260 19:33:16 -- target/multitarget.sh@28 -- # jq length 00:12:29.518 19:33:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:29.518 19:33:16 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:29.518 true 00:12:29.518 19:33:16 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:29.777 true 00:12:29.777 19:33:16 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.777 19:33:16 -- target/multitarget.sh@35 -- # jq length 00:12:29.777 19:33:16 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:29.777 19:33:16 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:29.777 19:33:16 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:29.777 19:33:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:29.777 19:33:16 -- nvmf/common.sh@116 -- # sync 00:12:29.777 19:33:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:29.777 19:33:16 -- nvmf/common.sh@119 -- # set +e 00:12:29.777 19:33:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:29.777 19:33:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:29.777 rmmod nvme_tcp 00:12:30.035 rmmod nvme_fabrics 00:12:30.035 rmmod nvme_keyring 00:12:30.035 19:33:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.035 19:33:16 -- nvmf/common.sh@123 -- # set -e 00:12:30.035 19:33:16 -- nvmf/common.sh@124 -- # return 0 00:12:30.035 19:33:16 -- nvmf/common.sh@477 -- # '[' -n 77548 ']' 00:12:30.035 19:33:16 -- nvmf/common.sh@478 -- # killprocess 77548 00:12:30.035 19:33:16 -- common/autotest_common.sh@936 -- # '[' -z 77548 ']' 00:12:30.035 19:33:16 -- common/autotest_common.sh@940 -- # kill -0 77548 00:12:30.035 19:33:16 -- common/autotest_common.sh@941 -- # uname 00:12:30.035 19:33:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.035 19:33:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77548 00:12:30.035 19:33:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.035 19:33:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.035 killing process with pid 77548 00:12:30.035 19:33:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77548' 00:12:30.035 19:33:16 -- common/autotest_common.sh@955 -- # kill 77548 00:12:30.035 19:33:16 -- common/autotest_common.sh@960 -- # wait 77548 00:12:30.294 19:33:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.294 19:33:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.294 19:33:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.294 19:33:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.294 19:33:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.294 19:33:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.294 19:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.294 19:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.294 19:33:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:30.294 00:12:30.294 real 0m2.952s 00:12:30.294 user 0m9.593s 00:12:30.294 sys 0m0.714s 00:12:30.294 19:33:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.294 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:30.294 ************************************ 00:12:30.294 END TEST nvmf_multitarget 00:12:30.294 ************************************ 00:12:30.294 19:33:17 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.294 19:33:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.294 19:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.294 
19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:30.294 ************************************ 00:12:30.294 START TEST nvmf_rpc 00:12:30.294 ************************************ 00:12:30.294 19:33:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.294 * Looking for test storage... 00:12:30.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.294 19:33:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:30.554 19:33:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:30.554 19:33:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:30.554 19:33:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:30.554 19:33:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:30.554 19:33:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:30.554 19:33:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:30.554 19:33:17 -- scripts/common.sh@335 -- # IFS=.-: 00:12:30.554 19:33:17 -- scripts/common.sh@335 -- # read -ra ver1 00:12:30.554 19:33:17 -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.554 19:33:17 -- scripts/common.sh@336 -- # read -ra ver2 00:12:30.554 19:33:17 -- scripts/common.sh@337 -- # local 'op=<' 00:12:30.554 19:33:17 -- scripts/common.sh@339 -- # ver1_l=2 00:12:30.554 19:33:17 -- scripts/common.sh@340 -- # ver2_l=1 00:12:30.554 19:33:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:30.554 19:33:17 -- scripts/common.sh@343 -- # case "$op" in 00:12:30.554 19:33:17 -- scripts/common.sh@344 -- # : 1 00:12:30.554 19:33:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:30.554 19:33:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.554 19:33:17 -- scripts/common.sh@364 -- # decimal 1 00:12:30.554 19:33:17 -- scripts/common.sh@352 -- # local d=1 00:12:30.554 19:33:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.554 19:33:17 -- scripts/common.sh@354 -- # echo 1 00:12:30.554 19:33:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:30.554 19:33:17 -- scripts/common.sh@365 -- # decimal 2 00:12:30.554 19:33:17 -- scripts/common.sh@352 -- # local d=2 00:12:30.554 19:33:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.554 19:33:17 -- scripts/common.sh@354 -- # echo 2 00:12:30.554 19:33:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:30.554 19:33:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:30.554 19:33:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:30.554 19:33:17 -- scripts/common.sh@367 -- # return 0 00:12:30.554 19:33:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.554 19:33:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.554 --rc genhtml_branch_coverage=1 00:12:30.554 --rc genhtml_function_coverage=1 00:12:30.554 --rc genhtml_legend=1 00:12:30.554 --rc geninfo_all_blocks=1 00:12:30.554 --rc geninfo_unexecuted_blocks=1 00:12:30.554 00:12:30.554 ' 00:12:30.554 19:33:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.554 --rc genhtml_branch_coverage=1 00:12:30.554 --rc genhtml_function_coverage=1 00:12:30.554 --rc genhtml_legend=1 00:12:30.554 --rc geninfo_all_blocks=1 00:12:30.554 --rc geninfo_unexecuted_blocks=1 00:12:30.554 00:12:30.554 ' 00:12:30.554 19:33:17 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.554 --rc genhtml_branch_coverage=1 00:12:30.554 --rc genhtml_function_coverage=1 00:12:30.554 --rc genhtml_legend=1 00:12:30.554 --rc geninfo_all_blocks=1 00:12:30.554 --rc geninfo_unexecuted_blocks=1 00:12:30.554 00:12:30.554 ' 00:12:30.554 19:33:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.554 --rc genhtml_branch_coverage=1 00:12:30.554 --rc genhtml_function_coverage=1 00:12:30.554 --rc genhtml_legend=1 00:12:30.554 --rc geninfo_all_blocks=1 00:12:30.554 --rc geninfo_unexecuted_blocks=1 00:12:30.554 00:12:30.554 ' 00:12:30.554 19:33:17 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.554 19:33:17 -- nvmf/common.sh@7 -- # uname -s 00:12:30.554 19:33:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.554 19:33:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.554 19:33:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.554 19:33:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.554 19:33:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.554 19:33:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.554 19:33:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.554 19:33:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.554 19:33:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.554 19:33:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.554 19:33:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:30.554 19:33:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:30.554 19:33:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.554 19:33:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.554 19:33:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.554 19:33:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.554 19:33:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.554 19:33:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.554 19:33:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.554 19:33:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.554 19:33:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.554 19:33:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.554 19:33:17 -- paths/export.sh@5 -- # export PATH 00:12:30.554 19:33:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.554 19:33:17 -- nvmf/common.sh@46 -- # : 0 00:12:30.554 19:33:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.554 19:33:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.554 19:33:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.554 19:33:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.554 19:33:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.554 19:33:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.554 19:33:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.554 19:33:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.554 19:33:17 -- target/rpc.sh@11 -- # loops=5 00:12:30.554 19:33:17 -- target/rpc.sh@23 -- # nvmftestinit 00:12:30.554 19:33:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:30.554 19:33:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.554 19:33:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:30.554 19:33:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:30.554 19:33:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:30.554 19:33:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.554 19:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.554 19:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.554 19:33:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:30.555 19:33:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:30.555 19:33:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:30.555 19:33:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:30.555 19:33:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:30.555 19:33:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:30.555 19:33:17 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:30.555 19:33:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.555 19:33:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:30.555 19:33:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:30.555 19:33:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.555 19:33:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.555 19:33:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.555 19:33:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.555 19:33:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.555 19:33:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.555 19:33:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.555 19:33:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.555 19:33:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:30.555 19:33:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:30.555 Cannot find device "nvmf_tgt_br" 00:12:30.555 19:33:17 -- nvmf/common.sh@154 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.555 Cannot find device "nvmf_tgt_br2" 00:12:30.555 19:33:17 -- nvmf/common.sh@155 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:30.555 19:33:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:30.555 Cannot find device "nvmf_tgt_br" 00:12:30.555 19:33:17 -- nvmf/common.sh@157 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:30.555 Cannot find device "nvmf_tgt_br2" 00:12:30.555 19:33:17 -- nvmf/common.sh@158 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:30.555 19:33:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:30.555 19:33:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.555 19:33:17 -- nvmf/common.sh@161 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.555 19:33:17 -- nvmf/common.sh@162 -- # true 00:12:30.555 19:33:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.555 19:33:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.555 19:33:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.555 19:33:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.555 19:33:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.814 19:33:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.814 19:33:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.814 19:33:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.814 19:33:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.814 19:33:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:30.814 19:33:17 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:30.814 19:33:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:30.814 19:33:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:30.814 19:33:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.814 19:33:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.814 19:33:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.814 19:33:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:30.814 19:33:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:30.814 19:33:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.814 19:33:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.814 19:33:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.814 19:33:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.814 19:33:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.814 19:33:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:30.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:30.814 00:12:30.814 --- 10.0.0.2 ping statistics --- 00:12:30.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.814 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:30.814 19:33:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:30.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:30.814 00:12:30.814 --- 10.0.0.3 ping statistics --- 00:12:30.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.814 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:30.814 19:33:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:30.814 00:12:30.814 --- 10.0.0.1 ping statistics --- 00:12:30.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.814 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:30.814 19:33:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.814 19:33:17 -- nvmf/common.sh@421 -- # return 0 00:12:30.814 19:33:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:30.814 19:33:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.814 19:33:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:30.814 19:33:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:30.814 19:33:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.814 19:33:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:30.814 19:33:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:30.814 19:33:17 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:30.814 19:33:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:30.814 19:33:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.814 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:30.814 19:33:17 -- nvmf/common.sh@469 -- # nvmfpid=77788 00:12:30.814 19:33:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.814 19:33:17 -- nvmf/common.sh@470 -- # waitforlisten 77788 00:12:30.814 19:33:17 -- common/autotest_common.sh@829 -- # '[' -z 77788 ']' 00:12:30.815 19:33:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.815 19:33:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.815 19:33:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.815 19:33:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.815 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:30.815 [2024-12-15 19:33:17.680577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:30.815 [2024-12-15 19:33:17.680669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.073 [2024-12-15 19:33:17.818506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.073 [2024-12-15 19:33:17.897742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.073 [2024-12-15 19:33:17.897941] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.073 [2024-12-15 19:33:17.897967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.073 [2024-12-15 19:33:17.897975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:31.073 [2024-12-15 19:33:17.898131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.073 [2024-12-15 19:33:17.898716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.073 [2024-12-15 19:33:17.899250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.073 [2024-12-15 19:33:17.899307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.009 19:33:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.009 19:33:18 -- common/autotest_common.sh@862 -- # return 0 00:12:32.009 19:33:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.009 19:33:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.009 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:32.009 19:33:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.009 19:33:18 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:32.009 19:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.009 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:32.009 19:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.009 19:33:18 -- target/rpc.sh@26 -- # stats='{ 00:12:32.009 "poll_groups": [ 00:12:32.009 { 00:12:32.009 "admin_qpairs": 0, 00:12:32.009 "completed_nvme_io": 0, 00:12:32.009 "current_admin_qpairs": 0, 00:12:32.009 "current_io_qpairs": 0, 00:12:32.009 "io_qpairs": 0, 00:12:32.009 "name": "nvmf_tgt_poll_group_0", 00:12:32.009 "pending_bdev_io": 0, 00:12:32.009 "transports": [] 00:12:32.009 }, 00:12:32.009 { 00:12:32.009 "admin_qpairs": 0, 00:12:32.009 "completed_nvme_io": 0, 00:12:32.009 "current_admin_qpairs": 0, 00:12:32.009 "current_io_qpairs": 0, 00:12:32.009 "io_qpairs": 0, 00:12:32.009 "name": "nvmf_tgt_poll_group_1", 00:12:32.009 "pending_bdev_io": 0, 00:12:32.009 "transports": [] 00:12:32.009 }, 00:12:32.009 { 00:12:32.009 "admin_qpairs": 0, 00:12:32.009 "completed_nvme_io": 0, 00:12:32.009 "current_admin_qpairs": 0, 00:12:32.009 "current_io_qpairs": 0, 00:12:32.009 "io_qpairs": 0, 00:12:32.009 "name": "nvmf_tgt_poll_group_2", 00:12:32.009 "pending_bdev_io": 0, 00:12:32.009 "transports": [] 00:12:32.009 }, 00:12:32.009 { 00:12:32.009 "admin_qpairs": 0, 00:12:32.009 "completed_nvme_io": 0, 00:12:32.009 "current_admin_qpairs": 0, 00:12:32.009 "current_io_qpairs": 0, 00:12:32.009 "io_qpairs": 0, 00:12:32.009 "name": "nvmf_tgt_poll_group_3", 00:12:32.009 "pending_bdev_io": 0, 00:12:32.009 "transports": [] 00:12:32.009 } 00:12:32.009 ], 00:12:32.009 "tick_rate": 2200000000 00:12:32.009 }' 00:12:32.009 19:33:18 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.009 19:33:18 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.009 19:33:18 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.009 19:33:18 -- target/rpc.sh@15 -- # wc -l 00:12:32.009 19:33:18 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.009 19:33:18 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:32.009 19:33:18 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:32.009 19:33:18 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.009 19:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.009 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:32.009 [2024-12-15 19:33:18.863089] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.009 19:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.009 19:33:18 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:32.009 19:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.009 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:32.267 19:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.267 19:33:18 -- target/rpc.sh@33 -- # stats='{ 00:12:32.267 "poll_groups": [ 00:12:32.267 { 00:12:32.267 "admin_qpairs": 0, 00:12:32.267 "completed_nvme_io": 0, 00:12:32.267 "current_admin_qpairs": 0, 00:12:32.267 "current_io_qpairs": 0, 00:12:32.267 "io_qpairs": 0, 00:12:32.267 "name": "nvmf_tgt_poll_group_0", 00:12:32.267 "pending_bdev_io": 0, 00:12:32.267 "transports": [ 00:12:32.267 { 00:12:32.267 "trtype": "TCP" 00:12:32.267 } 00:12:32.267 ] 00:12:32.267 }, 00:12:32.267 { 00:12:32.267 "admin_qpairs": 0, 00:12:32.267 "completed_nvme_io": 0, 00:12:32.267 "current_admin_qpairs": 0, 00:12:32.268 "current_io_qpairs": 0, 00:12:32.268 "io_qpairs": 0, 00:12:32.268 "name": "nvmf_tgt_poll_group_1", 00:12:32.268 "pending_bdev_io": 0, 00:12:32.268 "transports": [ 00:12:32.268 { 00:12:32.268 "trtype": "TCP" 00:12:32.268 } 00:12:32.268 ] 00:12:32.268 }, 00:12:32.268 { 00:12:32.268 "admin_qpairs": 0, 00:12:32.268 "completed_nvme_io": 0, 00:12:32.268 "current_admin_qpairs": 0, 00:12:32.268 "current_io_qpairs": 0, 00:12:32.268 "io_qpairs": 0, 00:12:32.268 "name": "nvmf_tgt_poll_group_2", 00:12:32.268 "pending_bdev_io": 0, 00:12:32.268 "transports": [ 00:12:32.268 { 00:12:32.268 "trtype": "TCP" 00:12:32.268 } 00:12:32.268 ] 00:12:32.268 }, 00:12:32.268 { 00:12:32.268 "admin_qpairs": 0, 00:12:32.268 "completed_nvme_io": 0, 00:12:32.268 "current_admin_qpairs": 0, 00:12:32.268 "current_io_qpairs": 0, 00:12:32.268 "io_qpairs": 0, 00:12:32.268 "name": "nvmf_tgt_poll_group_3", 00:12:32.268 "pending_bdev_io": 0, 00:12:32.268 "transports": [ 00:12:32.268 { 00:12:32.268 "trtype": "TCP" 00:12:32.268 } 00:12:32.268 ] 00:12:32.268 } 00:12:32.268 ], 00:12:32.268 "tick_rate": 2200000000 00:12:32.268 }' 00:12:32.268 19:33:18 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.268 19:33:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:32.268 19:33:18 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.268 19:33:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.268 19:33:19 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:32.268 19:33:19 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:32.268 19:33:19 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:32.268 19:33:19 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:32.268 19:33:19 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 Malloc1 00:12:32.268 19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 
19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 [2024-12-15 19:33:19.076652] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.268 19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -a 10.0.0.2 -s 4420 00:12:32.268 19:33:19 -- common/autotest_common.sh@650 -- # local es=0 00:12:32.268 19:33:19 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -a 10.0.0.2 -s 4420 00:12:32.268 19:33:19 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:32.268 19:33:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.268 19:33:19 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:32.268 19:33:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.268 19:33:19 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:32.268 19:33:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.268 19:33:19 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:32.268 19:33:19 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.268 19:33:19 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -a 10.0.0.2 -s 4420 00:12:32.268 [2024-12-15 19:33:19.105066] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1' 00:12:32.268 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.268 could not add new controller: failed to write to nvme-fabrics device 00:12:32.268 19:33:19 -- common/autotest_common.sh@653 -- # es=1 00:12:32.268 19:33:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.268 19:33:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.268 19:33:19 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:32.268 19:33:19 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:32.268 19:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.268 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:12:32.268 19:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.268 19:33:19 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.526 19:33:19 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.527 19:33:19 -- common/autotest_common.sh@1187 -- # local i=0 00:12:32.527 19:33:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.527 19:33:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:32.527 19:33:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:34.429 19:33:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:34.429 19:33:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:34.429 19:33:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.429 19:33:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:34.429 19:33:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.429 19:33:21 -- common/autotest_common.sh@1197 -- # return 0 00:12:34.429 19:33:21 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.688 19:33:21 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.688 19:33:21 -- common/autotest_common.sh@1208 -- # local i=0 00:12:34.688 19:33:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:34.688 19:33:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.688 19:33:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:34.688 19:33:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.688 19:33:21 -- common/autotest_common.sh@1220 -- # return 0 00:12:34.688 19:33:21 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:34.688 19:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.688 19:33:21 -- common/autotest_common.sh@10 -- # set +x 00:12:34.688 19:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.688 19:33:21 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.688 19:33:21 -- common/autotest_common.sh@650 -- # local es=0 00:12:34.688 19:33:21 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.688 19:33:21 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:34.688 19:33:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.688 19:33:21 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:34.688 19:33:21 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.688 19:33:21 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:34.688 19:33:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.688 19:33:21 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:34.688 19:33:21 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.688 19:33:21 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.688 [2024-12-15 19:33:21.406036] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1' 00:12:34.688 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.688 could not add new controller: failed to write to nvme-fabrics device 00:12:34.688 19:33:21 -- common/autotest_common.sh@653 -- # es=1 00:12:34.688 19:33:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.688 19:33:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.688 19:33:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.688 19:33:21 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:34.688 19:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.688 19:33:21 -- common/autotest_common.sh@10 -- # set +x 00:12:34.688 19:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.688 19:33:21 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.947 19:33:21 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.947 19:33:21 -- common/autotest_common.sh@1187 -- # local i=0 00:12:34.947 19:33:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.947 19:33:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:34.947 19:33:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:36.851 19:33:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:36.851 19:33:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:36.851 19:33:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.851 19:33:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:36.851 19:33:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.851 19:33:23 -- common/autotest_common.sh@1197 -- # return 0 00:12:36.851 19:33:23 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.851 19:33:23 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.851 19:33:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:36.851 19:33:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:36.851 19:33:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.851 19:33:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:36.851 19:33:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.851 19:33:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:36.851 19:33:23 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.851 19:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.851 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 19:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.851 19:33:23 -- target/rpc.sh@81 -- # seq 1 5 00:12:36.851 19:33:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.851 19:33:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.851 19:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.851 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 19:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.851 19:33:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.851 19:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.851 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 [2024-12-15 19:33:23.707042] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.851 19:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.851 19:33:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.851 19:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.851 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 19:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.851 19:33:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.851 19:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.851 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 19:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.851 19:33:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.110 19:33:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.110 19:33:23 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.110 19:33:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.110 19:33:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.110 19:33:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.642 19:33:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.642 19:33:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:39.642 19:33:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.642 19:33:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.642 19:33:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.642 19:33:25 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.642 19:33:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.642 19:33:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.642 19:33:25 -- common/autotest_common.sh@1208 -- # local i=0 00:12:39.642 19:33:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:39.642 19:33:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:39.642 19:33:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:39.642 19:33:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.642 19:33:25 -- common/autotest_common.sh@1220 -- # return 0 00:12:39.642 19:33:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.642 19:33:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:25 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.642 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.642 19:33:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.642 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.642 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 [2024-12-15 19:33:26.025945] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.642 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.642 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.642 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:12:39.642 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.642 19:33:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.642 19:33:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.642 19:33:26 -- common/autotest_common.sh@1187 -- # local i=0 00:12:39.642 19:33:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.642 19:33:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:39.642 19:33:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.557 19:33:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.557 19:33:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.557 19:33:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.557 19:33:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:41.557 19:33:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.557 19:33:28 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:41.557 19:33:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.558 19:33:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.558 19:33:28 -- common/autotest_common.sh@1208 -- # local i=0 00:12:41.558 19:33:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:41.558 19:33:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.558 19:33:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.558 19:33:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:41.558 19:33:28 -- common/autotest_common.sh@1220 -- # return 0 00:12:41.558 19:33:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.558 19:33:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 [2024-12-15 19:33:28.344964] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.558 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.558 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.558 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.558 19:33:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.830 19:33:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.830 19:33:28 -- common/autotest_common.sh@1187 -- # local i=0 00:12:41.830 19:33:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.830 19:33:28 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:41.830 19:33:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:43.733 19:33:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:43.733 19:33:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:43.733 19:33:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.733 19:33:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:43.733 19:33:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.733 19:33:30 -- common/autotest_common.sh@1197 -- # return 0 00:12:43.733 19:33:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.733 19:33:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.733 19:33:30 -- common/autotest_common.sh@1208 -- # local i=0 00:12:43.733 19:33:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:43.733 19:33:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.992 19:33:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:43.992 19:33:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.992 19:33:30 -- common/autotest_common.sh@1220 -- # return 0 00:12:43.992 19:33:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 19:33:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 19:33:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.992 19:33:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 19:33:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 [2024-12-15 19:33:30.671944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 19:33:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 19:33:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.992 19:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.992 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.992 19:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.992 
19:33:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.992 19:33:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.992 19:33:30 -- common/autotest_common.sh@1187 -- # local i=0 00:12:43.992 19:33:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.992 19:33:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:43.992 19:33:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:46.527 19:33:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:46.527 19:33:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:46.527 19:33:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.527 19:33:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:46.527 19:33:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.527 19:33:32 -- common/autotest_common.sh@1197 -- # return 0 00:12:46.527 19:33:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.527 19:33:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.527 19:33:32 -- common/autotest_common.sh@1208 -- # local i=0 00:12:46.527 19:33:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:46.527 19:33:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.527 19:33:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:46.527 19:33:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.527 19:33:32 -- common/autotest_common.sh@1220 -- # return 0 00:12:46.527 19:33:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.527 19:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 19:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.527 19:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 19:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.527 19:33:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.527 19:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 19:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.527 19:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 [2024-12-15 19:33:32.994849] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.527 19:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.527 
19:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 19:33:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.527 19:33:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.527 19:33:33 -- common/autotest_common.sh@10 -- # set +x 00:12:46.527 19:33:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.527 19:33:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.527 19:33:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.527 19:33:33 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.527 19:33:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.527 19:33:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:46.527 19:33:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:48.429 19:33:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:48.429 19:33:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:48.429 19:33:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.429 19:33:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:48.429 19:33:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.429 19:33:35 -- common/autotest_common.sh@1197 -- # return 0 00:12:48.429 19:33:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.429 19:33:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.429 19:33:35 -- common/autotest_common.sh@1208 -- # local i=0 00:12:48.429 19:33:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:48.429 19:33:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.429 19:33:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.429 19:33:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:48.429 19:33:35 -- common/autotest_common.sh@1220 -- # return 0 00:12:48.429 19:33:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.429 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.429 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.429 19:33:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.429 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.429 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.429 19:33:35 -- target/rpc.sh@99 -- # seq 1 5 00:12:48.429 19:33:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.429 19:33:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.429 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.429 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.429 19:33:35 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.429 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.429 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 [2024-12-15 19:33:35.325471] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.687 19:33:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 [2024-12-15 19:33:35.373518] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.687 19:33:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.687 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.687 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.687 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.687 19:33:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 [2024-12-15 19:33:35.425550] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.688 19:33:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 [2024-12-15 19:33:35.473582] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 
19:33:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:48.688 19:33:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 [2024-12-15 19:33:35.521658] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
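(Sketch: the per-poll-group stats dumped just below are summed by the jsum helper, which per the trace is a jq filter piped into awk. The test captures the JSON into $stats first; re-invoking the RPC inline, the two checks made here amount to:)

    rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # total admin qpairs across poll groups, expected > 0
    rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'   # total I/O qpairs across poll groups, expected > 0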
00:12:48.688 19:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.688 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:12:48.688 19:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.688 19:33:35 -- target/rpc.sh@110 -- # stats='{ 00:12:48.688 "poll_groups": [ 00:12:48.688 { 00:12:48.688 "admin_qpairs": 2, 00:12:48.688 "completed_nvme_io": 181, 00:12:48.688 "current_admin_qpairs": 0, 00:12:48.688 "current_io_qpairs": 0, 00:12:48.688 "io_qpairs": 16, 00:12:48.688 "name": "nvmf_tgt_poll_group_0", 00:12:48.688 "pending_bdev_io": 0, 00:12:48.688 "transports": [ 00:12:48.688 { 00:12:48.688 "trtype": "TCP" 00:12:48.688 } 00:12:48.688 ] 00:12:48.688 }, 00:12:48.688 { 00:12:48.688 "admin_qpairs": 3, 00:12:48.688 "completed_nvme_io": 100, 00:12:48.688 "current_admin_qpairs": 0, 00:12:48.688 "current_io_qpairs": 0, 00:12:48.688 "io_qpairs": 17, 00:12:48.688 "name": "nvmf_tgt_poll_group_1", 00:12:48.688 "pending_bdev_io": 0, 00:12:48.688 "transports": [ 00:12:48.688 { 00:12:48.688 "trtype": "TCP" 00:12:48.688 } 00:12:48.688 ] 00:12:48.688 }, 00:12:48.688 { 00:12:48.688 "admin_qpairs": 1, 00:12:48.688 "completed_nvme_io": 69, 00:12:48.688 "current_admin_qpairs": 0, 00:12:48.688 "current_io_qpairs": 0, 00:12:48.688 "io_qpairs": 19, 00:12:48.688 "name": "nvmf_tgt_poll_group_2", 00:12:48.688 "pending_bdev_io": 0, 00:12:48.688 "transports": [ 00:12:48.688 { 00:12:48.688 "trtype": "TCP" 00:12:48.688 } 00:12:48.688 ] 00:12:48.688 }, 00:12:48.688 { 00:12:48.688 "admin_qpairs": 1, 00:12:48.688 "completed_nvme_io": 70, 00:12:48.688 "current_admin_qpairs": 0, 00:12:48.688 "current_io_qpairs": 0, 00:12:48.688 "io_qpairs": 18, 00:12:48.688 "name": "nvmf_tgt_poll_group_3", 00:12:48.688 "pending_bdev_io": 0, 00:12:48.688 "transports": [ 00:12:48.688 { 00:12:48.688 "trtype": "TCP" 00:12:48.688 } 00:12:48.688 ] 00:12:48.688 } 00:12:48.688 ], 00:12:48.688 "tick_rate": 2200000000 00:12:48.688 }' 00:12:48.947 19:33:35 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.947 19:33:35 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:48.947 19:33:35 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:48.947 19:33:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.947 19:33:35 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:48.947 19:33:35 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:48.947 19:33:35 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:48.947 19:33:35 -- target/rpc.sh@123 -- # nvmftestfini 00:12:48.947 19:33:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:48.947 19:33:35 -- nvmf/common.sh@116 -- # sync 00:12:48.947 19:33:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:48.947 19:33:35 -- nvmf/common.sh@119 -- # set +e 00:12:48.947 19:33:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:48.947 19:33:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:48.947 rmmod nvme_tcp 00:12:48.947 rmmod nvme_fabrics 00:12:48.947 rmmod nvme_keyring 00:12:48.947 19:33:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:48.947 19:33:35 -- nvmf/common.sh@123 -- # set -e 00:12:48.947 19:33:35 -- nvmf/common.sh@124 
-- # return 0 00:12:48.947 19:33:35 -- nvmf/common.sh@477 -- # '[' -n 77788 ']' 00:12:48.947 19:33:35 -- nvmf/common.sh@478 -- # killprocess 77788 00:12:48.947 19:33:35 -- common/autotest_common.sh@936 -- # '[' -z 77788 ']' 00:12:48.947 19:33:35 -- common/autotest_common.sh@940 -- # kill -0 77788 00:12:48.947 19:33:35 -- common/autotest_common.sh@941 -- # uname 00:12:48.947 19:33:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.947 19:33:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77788 00:12:48.947 19:33:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:48.947 19:33:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:48.947 killing process with pid 77788 00:12:48.947 19:33:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77788' 00:12:48.947 19:33:35 -- common/autotest_common.sh@955 -- # kill 77788 00:12:48.947 19:33:35 -- common/autotest_common.sh@960 -- # wait 77788 00:12:49.515 19:33:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:49.515 19:33:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:49.515 19:33:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:49.515 19:33:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.515 19:33:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:49.515 19:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.515 19:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.515 19:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.515 19:33:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:49.515 00:12:49.515 real 0m19.043s 00:12:49.515 user 1m11.648s 00:12:49.515 sys 0m2.701s 00:12:49.515 19:33:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.515 ************************************ 00:12:49.515 19:33:36 -- common/autotest_common.sh@10 -- # set +x 00:12:49.515 END TEST nvmf_rpc 00:12:49.515 ************************************ 00:12:49.515 19:33:36 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.515 19:33:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:49.515 19:33:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.515 19:33:36 -- common/autotest_common.sh@10 -- # set +x 00:12:49.515 ************************************ 00:12:49.515 START TEST nvmf_invalid 00:12:49.515 ************************************ 00:12:49.515 19:33:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.515 * Looking for test storage... 
00:12:49.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:49.515 19:33:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:49.515 19:33:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:49.515 19:33:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:49.515 19:33:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:49.515 19:33:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:49.515 19:33:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:49.515 19:33:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:49.515 19:33:36 -- scripts/common.sh@335 -- # IFS=.-: 00:12:49.516 19:33:36 -- scripts/common.sh@335 -- # read -ra ver1 00:12:49.516 19:33:36 -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.516 19:33:36 -- scripts/common.sh@336 -- # read -ra ver2 00:12:49.516 19:33:36 -- scripts/common.sh@337 -- # local 'op=<' 00:12:49.516 19:33:36 -- scripts/common.sh@339 -- # ver1_l=2 00:12:49.516 19:33:36 -- scripts/common.sh@340 -- # ver2_l=1 00:12:49.516 19:33:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:49.516 19:33:36 -- scripts/common.sh@343 -- # case "$op" in 00:12:49.516 19:33:36 -- scripts/common.sh@344 -- # : 1 00:12:49.516 19:33:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:49.516 19:33:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.516 19:33:36 -- scripts/common.sh@364 -- # decimal 1 00:12:49.516 19:33:36 -- scripts/common.sh@352 -- # local d=1 00:12:49.516 19:33:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.516 19:33:36 -- scripts/common.sh@354 -- # echo 1 00:12:49.516 19:33:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:49.516 19:33:36 -- scripts/common.sh@365 -- # decimal 2 00:12:49.516 19:33:36 -- scripts/common.sh@352 -- # local d=2 00:12:49.516 19:33:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.516 19:33:36 -- scripts/common.sh@354 -- # echo 2 00:12:49.516 19:33:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:49.516 19:33:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:49.516 19:33:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:49.516 19:33:36 -- scripts/common.sh@367 -- # return 0 00:12:49.516 19:33:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.516 19:33:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.516 --rc genhtml_branch_coverage=1 00:12:49.516 --rc genhtml_function_coverage=1 00:12:49.516 --rc genhtml_legend=1 00:12:49.516 --rc geninfo_all_blocks=1 00:12:49.516 --rc geninfo_unexecuted_blocks=1 00:12:49.516 00:12:49.516 ' 00:12:49.516 19:33:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.516 --rc genhtml_branch_coverage=1 00:12:49.516 --rc genhtml_function_coverage=1 00:12:49.516 --rc genhtml_legend=1 00:12:49.516 --rc geninfo_all_blocks=1 00:12:49.516 --rc geninfo_unexecuted_blocks=1 00:12:49.516 00:12:49.516 ' 00:12:49.516 19:33:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.516 --rc genhtml_branch_coverage=1 00:12:49.516 --rc genhtml_function_coverage=1 00:12:49.516 --rc genhtml_legend=1 00:12:49.516 --rc geninfo_all_blocks=1 00:12:49.516 --rc geninfo_unexecuted_blocks=1 00:12:49.516 00:12:49.516 ' 00:12:49.516 
19:33:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:49.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.516 --rc genhtml_branch_coverage=1 00:12:49.516 --rc genhtml_function_coverage=1 00:12:49.516 --rc genhtml_legend=1 00:12:49.516 --rc geninfo_all_blocks=1 00:12:49.516 --rc geninfo_unexecuted_blocks=1 00:12:49.516 00:12:49.516 ' 00:12:49.516 19:33:36 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.516 19:33:36 -- nvmf/common.sh@7 -- # uname -s 00:12:49.516 19:33:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.516 19:33:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.516 19:33:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.516 19:33:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.516 19:33:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.516 19:33:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.516 19:33:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.516 19:33:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.516 19:33:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.516 19:33:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.516 19:33:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:49.516 19:33:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:49.516 19:33:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.516 19:33:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.516 19:33:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.516 19:33:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.516 19:33:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.516 19:33:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.516 19:33:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.516 19:33:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.516 19:33:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.516 19:33:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.516 19:33:36 -- paths/export.sh@5 -- # export PATH 00:12:49.516 19:33:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.516 19:33:36 -- nvmf/common.sh@46 -- # : 0 00:12:49.516 19:33:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:49.516 19:33:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:49.516 19:33:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:49.516 19:33:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.516 19:33:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.516 19:33:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:49.516 19:33:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:49.516 19:33:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:49.516 19:33:36 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:49.516 19:33:36 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.516 19:33:36 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:49.516 19:33:36 -- target/invalid.sh@14 -- # target=foobar 00:12:49.516 19:33:36 -- target/invalid.sh@16 -- # RANDOM=0 00:12:49.516 19:33:36 -- target/invalid.sh@34 -- # nvmftestinit 00:12:49.516 19:33:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:49.516 19:33:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.516 19:33:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:49.516 19:33:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:49.516 19:33:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:49.516 19:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.516 19:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.516 19:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.775 19:33:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:49.775 19:33:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:49.775 19:33:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:49.775 19:33:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:49.775 19:33:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:49.775 19:33:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:49.775 19:33:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.775 19:33:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.775 19:33:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:49.775 19:33:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:49.775 19:33:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.775 19:33:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.775 19:33:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.775 19:33:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.775 19:33:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.775 19:33:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.775 19:33:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.775 19:33:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.775 19:33:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:49.775 19:33:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:49.775 Cannot find device "nvmf_tgt_br" 00:12:49.775 19:33:36 -- nvmf/common.sh@154 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.775 Cannot find device "nvmf_tgt_br2" 00:12:49.775 19:33:36 -- nvmf/common.sh@155 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:49.775 19:33:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:49.775 Cannot find device "nvmf_tgt_br" 00:12:49.775 19:33:36 -- nvmf/common.sh@157 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:49.775 Cannot find device "nvmf_tgt_br2" 00:12:49.775 19:33:36 -- nvmf/common.sh@158 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:49.775 19:33:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:49.775 19:33:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.775 19:33:36 -- nvmf/common.sh@161 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.775 19:33:36 -- nvmf/common.sh@162 -- # true 00:12:49.775 19:33:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.776 19:33:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.776 19:33:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.776 19:33:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.776 19:33:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.776 19:33:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.776 19:33:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.776 19:33:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.776 19:33:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.776 19:33:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:49.776 19:33:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:49.776 19:33:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:49.776 19:33:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:49.776 19:33:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.776 19:33:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.035 19:33:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.035 19:33:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:50.035 19:33:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:50.035 19:33:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.035 19:33:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.035 19:33:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.035 19:33:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.035 19:33:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.035 19:33:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:50.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:12:50.035 00:12:50.035 --- 10.0.0.2 ping statistics --- 00:12:50.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.035 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:50.035 19:33:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:50.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:50.035 00:12:50.035 --- 10.0.0.3 ping statistics --- 00:12:50.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.035 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:50.035 19:33:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:50.035 00:12:50.035 --- 10.0.0.1 ping statistics --- 00:12:50.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.035 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:50.035 19:33:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.035 19:33:36 -- nvmf/common.sh@421 -- # return 0 00:12:50.035 19:33:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:50.035 19:33:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.035 19:33:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:50.035 19:33:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:50.035 19:33:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.035 19:33:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:50.035 19:33:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:50.035 19:33:36 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:50.035 19:33:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:50.035 19:33:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:50.035 19:33:36 -- common/autotest_common.sh@10 -- # set +x 00:12:50.035 19:33:36 -- nvmf/common.sh@469 -- # nvmfpid=78302 00:12:50.035 19:33:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.035 19:33:36 -- nvmf/common.sh@470 -- # waitforlisten 78302 00:12:50.035 19:33:36 -- common/autotest_common.sh@829 -- # '[' -z 78302 ']' 00:12:50.035 19:33:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.035 19:33:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.035 19:33:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.035 19:33:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.035 19:33:36 -- common/autotest_common.sh@10 -- # set +x 00:12:50.035 [2024-12-15 19:33:36.832276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:50.035 [2024-12-15 19:33:36.832391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.294 [2024-12-15 19:33:36.968305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.294 [2024-12-15 19:33:37.050857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:50.294 [2024-12-15 19:33:37.051001] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.294 [2024-12-15 19:33:37.051014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.294 [2024-12-15 19:33:37.051021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
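(Sketch: for orientation, the nvmf_veth_init sequence traced above builds roughly the topology below before the target app is started: an initiator-side veth pair in the host namespace at 10.0.0.1 and a target-side pair inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2, joined by the nvmf_br bridge, with TCP/4420 opened and reachability verified by ping. The "link up" steps and the second target interface, nvmf_tgt_if2 at 10.0.0.3, are omitted here for brevity.)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                              # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # reachability check before the tests run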
00:12:50.294 [2024-12-15 19:33:37.051161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.294 [2024-12-15 19:33:37.051552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.294 [2024-12-15 19:33:37.052116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.294 [2024-12-15 19:33:37.052124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.230 19:33:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.230 19:33:37 -- common/autotest_common.sh@862 -- # return 0 00:12:51.230 19:33:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:51.230 19:33:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:51.230 19:33:37 -- common/autotest_common.sh@10 -- # set +x 00:12:51.230 19:33:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.230 19:33:37 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:51.230 19:33:37 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode139 00:12:51.230 [2024-12-15 19:33:38.098408] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:51.230 19:33:38 -- target/invalid.sh@40 -- # out='2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode139 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:51.230 request: 00:12:51.230 { 00:12:51.230 "method": "nvmf_create_subsystem", 00:12:51.230 "params": { 00:12:51.230 "nqn": "nqn.2016-06.io.spdk:cnode139", 00:12:51.230 "tgt_name": "foobar" 00:12:51.230 } 00:12:51.230 } 00:12:51.230 Got JSON-RPC error response 00:12:51.230 GoRPCClient: error on JSON-RPC call' 00:12:51.230 19:33:38 -- target/invalid.sh@41 -- # [[ 2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode139 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:51.230 request: 00:12:51.230 { 00:12:51.230 "method": "nvmf_create_subsystem", 00:12:51.230 "params": { 00:12:51.230 "nqn": "nqn.2016-06.io.spdk:cnode139", 00:12:51.230 "tgt_name": "foobar" 00:12:51.230 } 00:12:51.230 } 00:12:51.230 Got JSON-RPC error response 00:12:51.230 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:51.489 19:33:38 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:51.489 19:33:38 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11613 00:12:51.748 [2024-12-15 19:33:38.398728] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11613: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:51.748 19:33:38 -- target/invalid.sh@45 -- # out='2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11613 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:51.748 request: 00:12:51.748 { 00:12:51.748 "method": "nvmf_create_subsystem", 00:12:51.748 "params": { 00:12:51.748 "nqn": "nqn.2016-06.io.spdk:cnode11613", 00:12:51.748 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:51.748 } 00:12:51.748 } 00:12:51.748 Got JSON-RPC error response 00:12:51.748 GoRPCClient: error on JSON-RPC call' 00:12:51.748 19:33:38 -- target/invalid.sh@46 -- # [[ 2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11613 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:51.748 request: 00:12:51.748 { 00:12:51.748 "method": "nvmf_create_subsystem", 00:12:51.748 "params": { 00:12:51.748 "nqn": "nqn.2016-06.io.spdk:cnode11613", 00:12:51.748 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:51.748 } 00:12:51.748 } 00:12:51.748 Got JSON-RPC error response 00:12:51.748 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:51.748 19:33:38 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:51.748 19:33:38 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13080 00:12:52.007 [2024-12-15 19:33:38.699019] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13080: invalid model number 'SPDK_Controller' 00:12:52.007 19:33:38 -- target/invalid.sh@50 -- # out='2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13080], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:52.007 request: 00:12:52.007 { 00:12:52.007 "method": "nvmf_create_subsystem", 00:12:52.007 "params": { 00:12:52.007 "nqn": "nqn.2016-06.io.spdk:cnode13080", 00:12:52.007 "model_number": "SPDK_Controller\u001f" 00:12:52.007 } 00:12:52.007 } 00:12:52.007 Got JSON-RPC error response 00:12:52.007 GoRPCClient: error on JSON-RPC call' 00:12:52.007 19:33:38 -- target/invalid.sh@51 -- # [[ 2024/12/15 19:33:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13080], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:52.007 request: 00:12:52.007 { 00:12:52.007 "method": "nvmf_create_subsystem", 00:12:52.007 "params": { 00:12:52.007 "nqn": "nqn.2016-06.io.spdk:cnode13080", 00:12:52.007 "model_number": "SPDK_Controller\u001f" 00:12:52.007 } 00:12:52.007 } 00:12:52.007 Got JSON-RPC error response 00:12:52.007 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:52.007 19:33:38 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:52.007 19:33:38 -- target/invalid.sh@19 -- # local length=21 ll 00:12:52.007 19:33:38 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:52.007 19:33:38 -- target/invalid.sh@21 -- # local chars 00:12:52.007 19:33:38 -- target/invalid.sh@22 -- # local string 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 93 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=']' 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 52 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=4 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 45 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=- 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 56 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=8 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 55 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=7 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 87 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=W 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 89 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=Y 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 127 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 77 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=M 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 42 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+='*' 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 72 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=H 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 62 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+='>' 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 84 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=T 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 103 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=g 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 102 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=f 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 88 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=X 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # printf %x 80 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:52.007 19:33:38 -- target/invalid.sh@25 -- # string+=P 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.007 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # printf %x 73 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # string+=I 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # printf %x 51 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # string+=3 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # printf %x 94 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # string+='^' 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # printf %x 113 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:52.008 19:33:38 -- target/invalid.sh@25 -- # string+=q 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.008 19:33:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.008 19:33:38 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:12:52.008 19:33:38 -- target/invalid.sh@31 -- # echo ']4-87WYM*H>TgfXPI3^q' 00:12:52.008 19:33:38 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ']4-87WYM*H>TgfXPI3^q' nqn.2016-06.io.spdk:cnode30050 
00:12:52.266 [2024-12-15 19:33:39.119373] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30050: invalid serial number ']4-87WYM*H>TgfXPI3^q' 00:12:52.266 19:33:39 -- target/invalid.sh@54 -- # out='2024/12/15 19:33:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30050 serial_number:]4-87WYM*H>TgfXPI3^q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ]4-87WYM*H>TgfXPI3^q 00:12:52.266 request: 00:12:52.266 { 00:12:52.267 "method": "nvmf_create_subsystem", 00:12:52.267 "params": { 00:12:52.267 "nqn": "nqn.2016-06.io.spdk:cnode30050", 00:12:52.267 "serial_number": "]4-87WY\u007fM*H>TgfXPI3^q" 00:12:52.267 } 00:12:52.267 } 00:12:52.267 Got JSON-RPC error response 00:12:52.267 GoRPCClient: error on JSON-RPC call' 00:12:52.267 19:33:39 -- target/invalid.sh@55 -- # [[ 2024/12/15 19:33:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30050 serial_number:]4-87WYM*H>TgfXPI3^q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ]4-87WYM*H>TgfXPI3^q 00:12:52.267 request: 00:12:52.267 { 00:12:52.267 "method": "nvmf_create_subsystem", 00:12:52.267 "params": { 00:12:52.267 "nqn": "nqn.2016-06.io.spdk:cnode30050", 00:12:52.267 "serial_number": "]4-87WY\u007fM*H>TgfXPI3^q" 00:12:52.267 } 00:12:52.267 } 00:12:52.267 Got JSON-RPC error response 00:12:52.267 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.267 19:33:39 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:52.267 19:33:39 -- target/invalid.sh@19 -- # local length=41 ll 00:12:52.267 19:33:39 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:52.267 19:33:39 -- target/invalid.sh@21 -- # local chars 00:12:52.267 19:33:39 -- target/invalid.sh@22 -- # local string 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # printf %x 37 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # string+=% 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # printf %x 84 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # string+=T 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # printf %x 72 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:52.267 19:33:39 -- target/invalid.sh@25 -- # string+=H 00:12:52.267 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 94 00:12:52.526 19:33:39 
-- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+='^' 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 37 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=% 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 114 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=r 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 45 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=- 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 109 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=m 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 115 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=s 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 65 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=A 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 123 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+='{' 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 67 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=C 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 48 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=0 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 82 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=R 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 34 00:12:52.526 19:33:39 -- 
target/invalid.sh@25 -- # echo -e '\x22' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+='"' 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 113 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=q 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 102 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=f 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 123 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+='{' 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 85 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=U 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 60 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+='<' 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 118 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=v 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 46 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=. 
00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # printf %x 104 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:52.526 19:33:39 -- target/invalid.sh@25 -- # string+=h 00:12:52.526 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 69 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=E 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 58 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=: 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 65 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=A 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 115 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=s 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 71 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=G 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 45 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=- 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 106 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=j 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 72 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=H 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 92 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+='\' 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 124 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+='|' 
00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 41 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=')' 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 122 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=z 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 90 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=Z 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 76 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=L 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 99 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=c 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 63 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+='?' 
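The long per-character trace above is gen_random_s from target/invalid.sh expanding one character per iteration: it picks a code point from a fixed list (ASCII 32 through 127), converts it to hex with printf %x, turns it into a character with echo -e '\xNN', and appends it to the string under construction. A condensed sketch of the same idea; the RANDOM-based selection is an assumption, only the printf/echo/append mechanics come from the trace:

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                 # same candidate codes as in the trace
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }

The 41-character value produced here feeds the next nvmf_create_subsystem validation (the call itself is truncated in this capture).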
00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 81 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=Q 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # printf %x 65 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:52.527 19:33:39 -- target/invalid.sh@25 -- # string+=A 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.527 19:33:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.527 19:33:39 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:52.527 19:33:39 -- target/invalid.sh@31 -- # echo '%TH^%r-msA{C0R"qf{U /dev/null' 00:12:55.428 19:33:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.428 19:33:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:55.428 ************************************ 00:12:55.428 END TEST nvmf_invalid 00:12:55.428 ************************************ 00:12:55.428 00:12:55.428 real 0m6.068s 00:12:55.428 user 0m24.083s 00:12:55.428 sys 0m1.368s 00:12:55.428 19:33:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:55.428 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:12:55.428 19:33:42 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:55.428 19:33:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.428 19:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.428 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:12:55.428 ************************************ 00:12:55.428 START TEST nvmf_abort 00:12:55.428 ************************************ 00:12:55.428 19:33:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:55.687 * Looking for test storage... 00:12:55.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.687 19:33:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:55.687 19:33:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:55.687 19:33:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:55.687 19:33:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:55.687 19:33:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:55.687 19:33:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:55.687 19:33:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:55.687 19:33:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:55.687 19:33:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:55.687 19:33:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.687 19:33:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:55.687 19:33:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:55.687 19:33:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:55.687 19:33:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:55.687 19:33:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:55.687 19:33:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:55.687 19:33:42 -- scripts/common.sh@344 -- # : 1 00:12:55.687 19:33:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:55.687 19:33:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.687 19:33:42 -- scripts/common.sh@364 -- # decimal 1 00:12:55.687 19:33:42 -- scripts/common.sh@352 -- # local d=1 00:12:55.687 19:33:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.687 19:33:42 -- scripts/common.sh@354 -- # echo 1 00:12:55.687 19:33:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:55.687 19:33:42 -- scripts/common.sh@365 -- # decimal 2 00:12:55.687 19:33:42 -- scripts/common.sh@352 -- # local d=2 00:12:55.687 19:33:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.687 19:33:42 -- scripts/common.sh@354 -- # echo 2 00:12:55.687 19:33:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:55.688 19:33:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:55.688 19:33:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:55.688 19:33:42 -- scripts/common.sh@367 -- # return 0 00:12:55.688 19:33:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.688 19:33:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:55.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.688 --rc genhtml_branch_coverage=1 00:12:55.688 --rc genhtml_function_coverage=1 00:12:55.688 --rc genhtml_legend=1 00:12:55.688 --rc geninfo_all_blocks=1 00:12:55.688 --rc geninfo_unexecuted_blocks=1 00:12:55.688 00:12:55.688 ' 00:12:55.688 19:33:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:55.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.688 --rc genhtml_branch_coverage=1 00:12:55.688 --rc genhtml_function_coverage=1 00:12:55.688 --rc genhtml_legend=1 00:12:55.688 --rc geninfo_all_blocks=1 00:12:55.688 --rc geninfo_unexecuted_blocks=1 00:12:55.688 00:12:55.688 ' 00:12:55.688 19:33:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:55.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.688 --rc genhtml_branch_coverage=1 00:12:55.688 --rc genhtml_function_coverage=1 00:12:55.688 --rc genhtml_legend=1 00:12:55.688 --rc geninfo_all_blocks=1 00:12:55.688 --rc geninfo_unexecuted_blocks=1 00:12:55.688 00:12:55.688 ' 00:12:55.688 19:33:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:55.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.688 --rc genhtml_branch_coverage=1 00:12:55.688 --rc genhtml_function_coverage=1 00:12:55.688 --rc genhtml_legend=1 00:12:55.688 --rc geninfo_all_blocks=1 00:12:55.688 --rc geninfo_unexecuted_blocks=1 00:12:55.688 00:12:55.688 ' 00:12:55.688 19:33:42 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.688 19:33:42 -- nvmf/common.sh@7 -- # uname -s 00:12:55.688 19:33:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.688 19:33:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.688 19:33:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.688 19:33:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.688 19:33:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.688 19:33:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.688 19:33:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.688 19:33:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.688 19:33:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.688 19:33:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:55.688 
19:33:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:12:55.688 19:33:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.688 19:33:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.688 19:33:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.688 19:33:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.688 19:33:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.688 19:33:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.688 19:33:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.688 19:33:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.688 19:33:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.688 19:33:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.688 19:33:42 -- paths/export.sh@5 -- # export PATH 00:12:55.688 19:33:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.688 19:33:42 -- nvmf/common.sh@46 -- # : 0 00:12:55.688 19:33:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:55.688 19:33:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:55.688 19:33:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:55.688 19:33:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.688 19:33:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.688 19:33:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
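Sourcing nvmf/common.sh here fixes the identities and application arguments used for the rest of the abort run: a host NQN freshly generated with nvme gen-hostnqn, its UUID reused as the host ID, nvme connect as the initiator front-end, and the nvmf_tgt arguments (-i for the shared-memory ID, -e 0xFFFF for full tracing). Tests in this suite that attach with the kernel initiator pass that pair straight through; roughly, and only as an illustration with standard nvme-cli flags (the subsystem NQN and address are the ones provisioned later in this run, and this exact command is not part of this capture):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"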
00:12:55.688 19:33:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:55.688 19:33:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:55.688 19:33:42 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.688 19:33:42 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:55.688 19:33:42 -- target/abort.sh@14 -- # nvmftestinit 00:12:55.688 19:33:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:55.688 19:33:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.688 19:33:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:55.688 19:33:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:55.688 19:33:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:55.688 19:33:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.688 19:33:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.688 19:33:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.688 19:33:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:55.688 19:33:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:55.688 19:33:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.688 19:33:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.688 19:33:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:55.688 19:33:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:55.688 19:33:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.688 19:33:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.688 19:33:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.688 19:33:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.688 19:33:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.688 19:33:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.688 19:33:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.688 19:33:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.688 19:33:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:55.688 19:33:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:55.688 Cannot find device "nvmf_tgt_br" 00:12:55.688 19:33:42 -- nvmf/common.sh@154 -- # true 00:12:55.688 19:33:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.688 Cannot find device "nvmf_tgt_br2" 00:12:55.688 19:33:42 -- nvmf/common.sh@155 -- # true 00:12:55.688 19:33:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:55.688 19:33:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:55.688 Cannot find device "nvmf_tgt_br" 00:12:55.688 19:33:42 -- nvmf/common.sh@157 -- # true 00:12:55.688 19:33:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:55.688 Cannot find device "nvmf_tgt_br2" 00:12:55.688 19:33:42 -- nvmf/common.sh@158 -- # true 00:12:55.688 19:33:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:55.947 19:33:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:55.947 19:33:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.947 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:55.947 19:33:42 -- nvmf/common.sh@161 -- # true 00:12:55.947 19:33:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.947 19:33:42 -- nvmf/common.sh@162 -- # true 00:12:55.947 19:33:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:55.947 19:33:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:55.947 19:33:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:55.947 19:33:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:55.947 19:33:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:55.948 19:33:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:55.948 19:33:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:55.948 19:33:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:55.948 19:33:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:55.948 19:33:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:55.948 19:33:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:55.948 19:33:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:55.948 19:33:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:55.948 19:33:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:55.948 19:33:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:55.948 19:33:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:55.948 19:33:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:55.948 19:33:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:55.948 19:33:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:55.948 19:33:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:55.948 19:33:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:55.948 19:33:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:55.948 19:33:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:55.948 19:33:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:55.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:12:55.948 00:12:55.948 --- 10.0.0.2 ping statistics --- 00:12:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.948 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:55.948 19:33:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:55.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:55.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:55.948 00:12:55.948 --- 10.0.0.3 ping statistics --- 00:12:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.948 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:55.948 19:33:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:55.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:55.948 00:12:55.948 --- 10.0.0.1 ping statistics --- 00:12:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.948 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:55.948 19:33:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.948 19:33:42 -- nvmf/common.sh@421 -- # return 0 00:12:55.948 19:33:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:55.948 19:33:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.948 19:33:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:55.948 19:33:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:55.948 19:33:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.948 19:33:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:55.948 19:33:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:55.948 19:33:42 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:55.948 19:33:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:55.948 19:33:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.948 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:12:55.948 19:33:42 -- nvmf/common.sh@469 -- # nvmfpid=78825 00:12:55.948 19:33:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:55.948 19:33:42 -- nvmf/common.sh@470 -- # waitforlisten 78825 00:12:55.948 19:33:42 -- common/autotest_common.sh@829 -- # '[' -z 78825 ']' 00:12:56.207 19:33:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.207 19:33:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.207 19:33:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.207 19:33:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.207 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:12:56.207 [2024-12-15 19:33:42.901222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:56.207 [2024-12-15 19:33:42.901328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.207 [2024-12-15 19:33:43.039520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.465 [2024-12-15 19:33:43.115089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:56.465 [2024-12-15 19:33:43.115618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.465 [2024-12-15 19:33:43.115671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.465 [2024-12-15 19:33:43.115944] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
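Everything from nvmftestinit up to this point builds the virtual test bed and then boots the target inside it: an nvmf_tgt_ns_spdk network namespace, veth pairs bridged through nvmf_br, 10.0.0.1/24 on the initiator side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, an iptables accept rule for TCP port 4420, ping checks in both directions, and finally nvmf_tgt launched in the namespace with full tracing and core mask 0xE. A condensed sketch of the same sequence, with the second target interface, the link-up steps and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # waitforlisten then polls until the RPC socket at /var/tmp/spdk.sock answers

The reactor messages that follow confirm the target is polling on cores 1, 2 and 3 of that mask, after which the abort test provisions its transport, bdevs and subsystem over RPC.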
00:12:56.465 [2024-12-15 19:33:43.116228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.465 [2024-12-15 19:33:43.116389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.465 [2024-12-15 19:33:43.116454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.032 19:33:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.032 19:33:43 -- common/autotest_common.sh@862 -- # return 0 00:12:57.032 19:33:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:57.032 19:33:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.032 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 19:33:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.291 19:33:43 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:57.291 19:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 [2024-12-15 19:33:43.971616] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.291 19:33:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:43 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:57.291 19:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 Malloc0 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.291 19:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 Delay0 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:57.291 19:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:57.291 19:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:57.291 19:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 [2024-12-15 19:33:44.058595] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.291 19:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.291 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.291 19:33:44 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:57.549 [2024-12-15 19:33:44.238488] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:59.450 Initializing NVMe Controllers 00:12:59.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:59.450 controller IO queue size 128 less than required 00:12:59.450 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:59.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:59.450 Initialization complete. Launching workers. 00:12:59.450 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 35684 00:12:59.450 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35749, failed to submit 62 00:12:59.450 success 35684, unsuccess 65, failed 0 00:12:59.450 19:33:46 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:59.450 19:33:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.450 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:12:59.450 19:33:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.450 19:33:46 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:59.450 19:33:46 -- target/abort.sh@38 -- # nvmftestfini 00:12:59.450 19:33:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:59.450 19:33:46 -- nvmf/common.sh@116 -- # sync 00:12:59.450 19:33:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.450 19:33:46 -- nvmf/common.sh@119 -- # set +e 00:12:59.450 19:33:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.450 19:33:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.709 rmmod nvme_tcp 00:12:59.709 rmmod nvme_fabrics 00:12:59.709 rmmod nvme_keyring 00:12:59.709 19:33:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:59.709 19:33:46 -- nvmf/common.sh@123 -- # set -e 00:12:59.709 19:33:46 -- nvmf/common.sh@124 -- # return 0 00:12:59.709 19:33:46 -- nvmf/common.sh@477 -- # '[' -n 78825 ']' 00:12:59.709 19:33:46 -- nvmf/common.sh@478 -- # killprocess 78825 00:12:59.709 19:33:46 -- common/autotest_common.sh@936 -- # '[' -z 78825 ']' 00:12:59.709 19:33:46 -- common/autotest_common.sh@940 -- # kill -0 78825 00:12:59.709 19:33:46 -- common/autotest_common.sh@941 -- # uname 00:12:59.709 19:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.709 19:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78825 00:12:59.709 killing process with pid 78825 00:12:59.709 19:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:59.709 19:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:59.709 19:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78825' 00:12:59.709 19:33:46 -- common/autotest_common.sh@955 -- # kill 78825 00:12:59.709 19:33:46 -- common/autotest_common.sh@960 -- # wait 78825 00:12:59.968 19:33:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.968 19:33:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.968 19:33:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.968 19:33:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.968 19:33:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.968 19:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.968 
19:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.968 19:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.968 19:33:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:59.968 00:12:59.968 real 0m4.446s 00:12:59.968 user 0m12.787s 00:12:59.968 sys 0m1.083s 00:12:59.968 19:33:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.968 ************************************ 00:12:59.968 END TEST nvmf_abort 00:12:59.968 ************************************ 00:12:59.968 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:12:59.968 19:33:46 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:59.968 19:33:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:59.968 19:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.968 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:12:59.968 ************************************ 00:12:59.968 START TEST nvmf_ns_hotplug_stress 00:12:59.968 ************************************ 00:12:59.968 19:33:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.227 * Looking for test storage... 00:13:00.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:00.227 19:33:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:00.227 19:33:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:00.227 19:33:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:00.227 19:33:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:00.227 19:33:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:00.227 19:33:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:00.227 19:33:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:00.227 19:33:46 -- scripts/common.sh@335 -- # IFS=.-: 00:13:00.227 19:33:46 -- scripts/common.sh@335 -- # read -ra ver1 00:13:00.227 19:33:46 -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.227 19:33:46 -- scripts/common.sh@336 -- # read -ra ver2 00:13:00.227 19:33:46 -- scripts/common.sh@337 -- # local 'op=<' 00:13:00.227 19:33:46 -- scripts/common.sh@339 -- # ver1_l=2 00:13:00.227 19:33:46 -- scripts/common.sh@340 -- # ver2_l=1 00:13:00.227 19:33:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:00.227 19:33:46 -- scripts/common.sh@343 -- # case "$op" in 00:13:00.227 19:33:46 -- scripts/common.sh@344 -- # : 1 00:13:00.227 19:33:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:00.227 19:33:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.227 19:33:46 -- scripts/common.sh@364 -- # decimal 1 00:13:00.227 19:33:46 -- scripts/common.sh@352 -- # local d=1 00:13:00.227 19:33:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.227 19:33:46 -- scripts/common.sh@354 -- # echo 1 00:13:00.227 19:33:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:00.227 19:33:46 -- scripts/common.sh@365 -- # decimal 2 00:13:00.227 19:33:46 -- scripts/common.sh@352 -- # local d=2 00:13:00.227 19:33:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.227 19:33:46 -- scripts/common.sh@354 -- # echo 2 00:13:00.227 19:33:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:00.227 19:33:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:00.227 19:33:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:00.227 19:33:46 -- scripts/common.sh@367 -- # return 0 00:13:00.227 19:33:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.227 19:33:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.227 --rc genhtml_branch_coverage=1 00:13:00.227 --rc genhtml_function_coverage=1 00:13:00.227 --rc genhtml_legend=1 00:13:00.227 --rc geninfo_all_blocks=1 00:13:00.227 --rc geninfo_unexecuted_blocks=1 00:13:00.227 00:13:00.227 ' 00:13:00.227 19:33:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.227 --rc genhtml_branch_coverage=1 00:13:00.227 --rc genhtml_function_coverage=1 00:13:00.227 --rc genhtml_legend=1 00:13:00.227 --rc geninfo_all_blocks=1 00:13:00.227 --rc geninfo_unexecuted_blocks=1 00:13:00.227 00:13:00.227 ' 00:13:00.227 19:33:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.227 --rc genhtml_branch_coverage=1 00:13:00.227 --rc genhtml_function_coverage=1 00:13:00.227 --rc genhtml_legend=1 00:13:00.227 --rc geninfo_all_blocks=1 00:13:00.227 --rc geninfo_unexecuted_blocks=1 00:13:00.227 00:13:00.227 ' 00:13:00.227 19:33:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:00.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.227 --rc genhtml_branch_coverage=1 00:13:00.227 --rc genhtml_function_coverage=1 00:13:00.227 --rc genhtml_legend=1 00:13:00.227 --rc geninfo_all_blocks=1 00:13:00.227 --rc geninfo_unexecuted_blocks=1 00:13:00.227 00:13:00.227 ' 00:13:00.227 19:33:46 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:00.227 19:33:46 -- nvmf/common.sh@7 -- # uname -s 00:13:00.227 19:33:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.227 19:33:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.227 19:33:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.227 19:33:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.227 19:33:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.227 19:33:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.227 19:33:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.227 19:33:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.227 19:33:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.227 19:33:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.227 19:33:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
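nvmf_ns_hotplug_stress goes through the same nvmftestinit path as the abort test that just finished, so the lines that follow repeat the namespace and veth bring-up. The "Cannot find device" and "Cannot open network namespace" messages it prints first are expected: the helper tears down any topology left over from a previous run before re-creating it, and on a clean pass those deletions simply fail and are ignored. Roughly, as a simplified sketch (the real helper also detaches bridge ports and handles the second target interface):

    # best-effort teardown of a previous run; failures are ignored
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    # then rebuild from scratch
    ip netns add nvmf_tgt_ns_spdk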
00:13:00.227 19:33:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:13:00.227 19:33:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.227 19:33:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.227 19:33:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:00.227 19:33:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.227 19:33:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.227 19:33:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.227 19:33:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.227 19:33:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.227 19:33:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.227 19:33:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.227 19:33:46 -- paths/export.sh@5 -- # export PATH 00:13:00.227 19:33:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.227 19:33:46 -- nvmf/common.sh@46 -- # : 0 00:13:00.227 19:33:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.227 19:33:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.227 19:33:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.227 19:33:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.227 19:33:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.228 19:33:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:00.228 19:33:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.228 19:33:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.228 19:33:46 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:00.228 19:33:46 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:00.228 19:33:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.228 19:33:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.228 19:33:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.228 19:33:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.228 19:33:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.228 19:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.228 19:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.228 19:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.228 19:33:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:00.228 19:33:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:00.228 19:33:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:00.228 19:33:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:00.228 19:33:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:00.228 19:33:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:00.228 19:33:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.228 19:33:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.228 19:33:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:00.228 19:33:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:00.228 19:33:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:00.228 19:33:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:00.228 19:33:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:00.228 19:33:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.228 19:33:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:00.228 19:33:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:00.228 19:33:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:00.228 19:33:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:00.228 19:33:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:00.228 19:33:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:00.228 Cannot find device "nvmf_tgt_br" 00:13:00.228 19:33:47 -- nvmf/common.sh@154 -- # true 00:13:00.228 19:33:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:00.228 Cannot find device "nvmf_tgt_br2" 00:13:00.228 19:33:47 -- nvmf/common.sh@155 -- # true 00:13:00.228 19:33:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:00.228 19:33:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:00.228 Cannot find device "nvmf_tgt_br" 00:13:00.228 19:33:47 -- nvmf/common.sh@157 -- # true 00:13:00.228 19:33:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:00.228 Cannot find device "nvmf_tgt_br2" 00:13:00.228 19:33:47 -- nvmf/common.sh@158 -- # true 00:13:00.228 19:33:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:00.228 19:33:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:00.486 19:33:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:00.486 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:00.486 19:33:47 -- nvmf/common.sh@161 -- # true 00:13:00.486 19:33:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:00.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.486 19:33:47 -- nvmf/common.sh@162 -- # true 00:13:00.486 19:33:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:00.486 19:33:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:00.486 19:33:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:00.486 19:33:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.486 19:33:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.486 19:33:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.486 19:33:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.486 19:33:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:00.486 19:33:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:00.486 19:33:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:00.486 19:33:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:00.486 19:33:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:00.486 19:33:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:00.486 19:33:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.486 19:33:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:00.486 19:33:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.486 19:33:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:00.486 19:33:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:00.486 19:33:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.486 19:33:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.486 19:33:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.486 19:33:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.486 19:33:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.486 19:33:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:00.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:00.487 00:13:00.487 --- 10.0.0.2 ping statistics --- 00:13:00.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.487 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:00.487 19:33:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:00.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:00.487 00:13:00.487 --- 10.0.0.3 ping statistics --- 00:13:00.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.487 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:00.487 19:33:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:00.487 00:13:00.487 --- 10.0.0.1 ping statistics --- 00:13:00.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.487 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:00.487 19:33:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.487 19:33:47 -- nvmf/common.sh@421 -- # return 0 00:13:00.487 19:33:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.487 19:33:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.487 19:33:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.487 19:33:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.487 19:33:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.487 19:33:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.487 19:33:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.487 19:33:47 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:00.487 19:33:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.487 19:33:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.487 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:00.487 19:33:47 -- nvmf/common.sh@469 -- # nvmfpid=79100 00:13:00.487 19:33:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:00.487 19:33:47 -- nvmf/common.sh@470 -- # waitforlisten 79100 00:13:00.487 19:33:47 -- common/autotest_common.sh@829 -- # '[' -z 79100 ']' 00:13:00.487 19:33:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.487 19:33:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.487 19:33:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.487 19:33:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.487 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:00.745 [2024-12-15 19:33:47.401113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:00.745 [2024-12-15 19:33:47.401218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.745 [2024-12-15 19:33:47.541645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.745 [2024-12-15 19:33:47.613465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.745 [2024-12-15 19:33:47.613615] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.745 [2024-12-15 19:33:47.613627] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.745 [2024-12-15 19:33:47.613636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
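The ip/iptables steps above are nvmf_veth_init building the test network that the target, started just below inside the new namespace, listens on. Condensed from the trace, with the link-up, loopback and FORWARD-rule steps omitted, the topology is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge                 # bridge joins the three root-namespace peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420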
00:13:00.745 [2024-12-15 19:33:47.613998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.745 [2024-12-15 19:33:47.614281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.745 [2024-12-15 19:33:47.614290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.681 19:33:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.681 19:33:48 -- common/autotest_common.sh@862 -- # return 0 00:13:01.681 19:33:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:01.681 19:33:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.681 19:33:48 -- common/autotest_common.sh@10 -- # set +x 00:13:01.681 19:33:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.681 19:33:48 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:01.681 19:33:48 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:01.939 [2024-12-15 19:33:48.647612] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.939 19:33:48 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.200 19:33:48 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.474 [2024-12-15 19:33:49.168272] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.474 19:33:49 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.743 19:33:49 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:02.743 Malloc0 00:13:03.002 19:33:49 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.002 Delay0 00:13:03.002 19:33:49 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.260 19:33:50 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:03.519 NULL1 00:13:03.519 19:33:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:03.776 19:33:50 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:03.776 19:33:50 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79231 00:13:03.776 19:33:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:03.776 19:33:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.148 Read completed with error (sct=0, sc=11) 00:13:05.148 19:33:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.148 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:05.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.407 19:33:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:05.407 19:33:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:05.665 true 00:13:05.665 19:33:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:05.665 19:33:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.602 19:33:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.602 19:33:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:06.602 19:33:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:06.861 true 00:13:06.861 19:33:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:06.861 19:33:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.120 19:33:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.378 19:33:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:07.378 19:33:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:07.637 true 00:13:07.637 19:33:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:07.637 19:33:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.573 19:33:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.573 19:33:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:08.573 19:33:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:08.832 true 00:13:08.832 19:33:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:08.832 19:33:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.090 19:33:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.349 19:33:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:09.349 19:33:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:09.608 true 00:13:09.608 19:33:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:09.608 19:33:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.543 19:33:57 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.543 19:33:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:10.543 19:33:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:10.802 true 00:13:10.802 19:33:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:10.802 19:33:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.060 19:33:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.319 19:33:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:11.319 19:33:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:11.578 true 00:13:11.578 19:33:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:11.578 19:33:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.514 19:33:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.514 19:33:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:12.514 19:33:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:12.772 true 00:13:12.772 19:33:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:12.772 19:33:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.031 19:33:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.289 19:34:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:13.289 19:34:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:13.548 true 00:13:13.548 19:34:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:13.548 19:34:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.484 19:34:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.742 19:34:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:14.742 19:34:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:14.742 true 00:13:14.742 19:34:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:14.742 19:34:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.001 19:34:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.259 19:34:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:15.259 19:34:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:15.517 true 00:13:15.517 19:34:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:15.517 19:34:02 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.453 19:34:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.712 19:34:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:16.712 19:34:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:16.971 true 00:13:16.971 19:34:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:16.971 19:34:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.229 19:34:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.488 19:34:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:17.488 19:34:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:17.488 true 00:13:17.488 19:34:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:17.488 19:34:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.424 19:34:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.682 19:34:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:18.682 19:34:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:18.940 true 00:13:18.940 19:34:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:18.940 19:34:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.198 19:34:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.457 19:34:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:19.457 19:34:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:19.457 true 00:13:19.457 19:34:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:19.457 19:34:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.393 19:34:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.651 19:34:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:20.651 19:34:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:20.909 true 00:13:20.910 19:34:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:20.910 19:34:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.168 19:34:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.426 19:34:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:21.426 19:34:08 -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:21.684 true 00:13:21.684 19:34:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:21.684 19:34:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.666 19:34:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.666 19:34:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:22.666 19:34:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:22.924 true 00:13:22.924 19:34:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:22.924 19:34:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.183 19:34:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.442 19:34:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:23.442 19:34:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:23.700 true 00:13:23.700 19:34:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:23.700 19:34:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.636 19:34:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.636 19:34:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:24.636 19:34:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:24.895 true 00:13:24.895 19:34:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:24.895 19:34:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.153 19:34:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.412 19:34:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:25.412 19:34:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:25.671 true 00:13:25.671 19:34:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:25.671 19:34:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.607 19:34:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.607 19:34:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:26.607 19:34:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:26.866 true 00:13:26.866 19:34:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:26.866 19:34:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.125 19:34:13 -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.384 19:34:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:27.384 19:34:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:27.643 true 00:13:27.643 19:34:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:27.643 19:34:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.579 19:34:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.837 19:34:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:28.837 19:34:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:29.095 true 00:13:29.095 19:34:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:29.095 19:34:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.354 19:34:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.612 19:34:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:29.612 19:34:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:29.612 true 00:13:29.872 19:34:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:29.872 19:34:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.872 19:34:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.131 19:34:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:30.131 19:34:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:30.390 true 00:13:30.390 19:34:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:30.390 19:34:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 19:34:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.768 19:34:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:31.768 19:34:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:32.027 true 00:13:32.027 19:34:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:32.027 19:34:18 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.962 19:34:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.962 19:34:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:32.962 19:34:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:33.221 true 00:13:33.221 19:34:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:33.221 19:34:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.480 19:34:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.738 19:34:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:33.738 19:34:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:33.997 true 00:13:33.997 19:34:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:33.997 19:34:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.997 Initializing NVMe Controllers 00:13:33.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.997 Controller IO queue size 128, less than required. 00:13:33.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.997 Controller IO queue size 128, less than required. 00:13:33.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:33.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:33.997 Initialization complete. Launching workers. 
00:13:33.997 ======================================================== 00:13:33.997 Latency(us) 00:13:33.997 Device Information : IOPS MiB/s Average min max 00:13:33.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 614.36 0.30 113741.49 2516.36 1104737.10 00:13:33.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13856.12 6.77 9237.43 1524.61 580834.34 00:13:33.997 ======================================================== 00:13:33.997 Total : 14470.48 7.07 13674.24 1524.61 1104737.10 00:13:33.997 00:13:34.256 19:34:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.514 19:34:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:34.514 19:34:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:34.514 true 00:13:34.773 19:34:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79231 00:13:34.773 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79231) - No such process 00:13:34.773 19:34:21 -- target/ns_hotplug_stress.sh@53 -- # wait 79231 00:13:34.773 19:34:21 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.773 19:34:21 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.031 19:34:21 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:35.031 19:34:21 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:35.031 19:34:21 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:35.031 19:34:21 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.031 19:34:21 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:35.289 null0 00:13:35.289 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.289 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.289 19:34:22 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:35.548 null1 00:13:35.548 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.548 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.548 19:34:22 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:35.806 null2 00:13:35.806 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.806 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.806 19:34:22 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:36.065 null3 00:13:36.065 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.065 19:34:22 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.065 19:34:22 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:36.324 null4 00:13:36.324 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.324 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.324 19:34:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:36.324 null5 00:13:36.582 19:34:23 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.582 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.582 19:34:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:36.582 null6 00:13:36.582 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.582 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.582 19:34:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:36.841 null7 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
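The @58-@64 lines running through this stretch are the launcher for the namespace hotplug stress: eight add_remove workers go into the background, one namespace ID and one freshly created null bdev each, and their pids are collected for a single wait once all are running (the wait itself appears a little further down). In sketch form:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
      # add_remove is the worker traced at @14-@18 (sketched below):
      # add_remove 1 null0, add_remove 2 null1, ... add_remove 8 null7
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
    done
    wait "${pids[@]}"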
00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:36.841 19:34:23 -- target/ns_hotplug_stress.sh@66 -- # wait 80275 80277 80279 80281 80283 80284 80286 80288 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.100 19:34:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.358 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.617 
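Each backgrounded worker is the add_remove helper whose @14-@18 trace is interleaved above; reconstructed from those lines, not copied verbatim from ns_hotplug_stress.sh, it amounts to:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    add_remove() {
      # repeatedly attach the given null bdev as namespace $nsid and detach it again
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
    }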
19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.617 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.875 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.876 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.134 19:34:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.392 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.393 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.393 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.393 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.393 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.651 19:34:25 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.651 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.910 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.168 19:34:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.168 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.427 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.686 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.945 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.204 19:34:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.204 19:34:27 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.204 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.462 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:40.721 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.985 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.268 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.268 19:34:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.268 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.539 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.798 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.798 19:34:28 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.057 19:34:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.316 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.575 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.834 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.834 19:34:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.834 19:34:29 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:42.834 19:34:29 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:42.834 19:34:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:42.834 19:34:29 -- nvmf/common.sh@116 -- # sync 00:13:42.834 19:34:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:42.834 19:34:29 -- nvmf/common.sh@119 -- # set +e 00:13:42.834 19:34:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:42.834 19:34:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:42.834 rmmod nvme_tcp 00:13:42.834 rmmod nvme_fabrics 00:13:42.834 rmmod nvme_keyring 00:13:42.834 19:34:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:42.834 19:34:29 -- nvmf/common.sh@123 -- # set -e 00:13:42.834 19:34:29 -- nvmf/common.sh@124 -- # return 0 00:13:42.834 19:34:29 -- nvmf/common.sh@477 -- # '[' -n 79100 ']' 00:13:42.834 19:34:29 -- nvmf/common.sh@478 -- # killprocess 79100 00:13:42.834 19:34:29 -- common/autotest_common.sh@936 -- # '[' -z 79100 ']' 00:13:42.834 19:34:29 -- common/autotest_common.sh@940 -- # kill -0 79100 00:13:42.834 19:34:29 -- common/autotest_common.sh@941 -- # uname 00:13:42.834 19:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.834 19:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79100 00:13:42.834 19:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:42.834 19:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:42.834 killing process with pid 79100 00:13:42.834 19:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79100' 00:13:42.834 19:34:29 -- common/autotest_common.sh@955 -- # kill 79100 00:13:42.834 19:34:29 -- common/autotest_common.sh@960 -- # wait 79100 00:13:43.402 19:34:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:43.402 19:34:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:43.402 19:34:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:43.402 19:34:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.402 19:34:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:43.402 19:34:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.402 19:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.402 19:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.402 19:34:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:43.402 
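The xtrace above is the namespace hotplug stress loop itself: eight null bdevs (null0 through null7) are repeatedly attached to subsystem nqn.2016-06.io.spdk:cnode1 with fixed NSIDs (bdev nullN gets NSID N+1) and detached again, bounded by the i < 10 guard at @16, with the concurrent workers interleaving in the log until the @68/@70 lines tear the target down. A minimal sketch of one worker, reconstructed from the @16-@18 trace lines only (the loop shape and the backgrounding of one worker per namespace are assumptions, not lifted from ns_hotplug_stress.sh):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    # one worker per namespace; NSID 1..8 maps to bdev null0..null7
    for nsid in $(seq 1 8); do
        (
            for ((i = 0; i < 10; ++i)); do                                           # @16 in the trace
                $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))"   # @17
                $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"                       # @18
            done
        ) &
    done
    wait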
00:13:43.402 real 0m43.218s 00:13:43.402 user 3m28.008s 00:13:43.402 sys 0m13.031s 00:13:43.402 19:34:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.402 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.402 ************************************ 00:13:43.402 END TEST nvmf_ns_hotplug_stress 00:13:43.402 ************************************ 00:13:43.402 19:34:30 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:43.402 19:34:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.402 19:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.402 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.402 ************************************ 00:13:43.402 START TEST nvmf_connect_stress 00:13:43.402 ************************************ 00:13:43.402 19:34:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:43.402 * Looking for test storage... 00:13:43.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.402 19:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:43.402 19:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:43.402 19:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:43.402 19:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:43.402 19:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:43.402 19:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:43.402 19:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:43.402 19:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:13:43.402 19:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:13:43.402 19:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.402 19:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:13:43.402 19:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:13:43.402 19:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:13:43.402 19:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:13:43.402 19:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:43.402 19:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:13:43.402 19:34:30 -- scripts/common.sh@344 -- # : 1 00:13:43.402 19:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:43.402 19:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.402 19:34:30 -- scripts/common.sh@364 -- # decimal 1 00:13:43.402 19:34:30 -- scripts/common.sh@352 -- # local d=1 00:13:43.402 19:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.402 19:34:30 -- scripts/common.sh@354 -- # echo 1 00:13:43.402 19:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:43.402 19:34:30 -- scripts/common.sh@365 -- # decimal 2 00:13:43.402 19:34:30 -- scripts/common.sh@352 -- # local d=2 00:13:43.402 19:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.402 19:34:30 -- scripts/common.sh@354 -- # echo 2 00:13:43.402 19:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:43.402 19:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:43.402 19:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:43.402 19:34:30 -- scripts/common.sh@367 -- # return 0 00:13:43.402 19:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.402 19:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:43.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.402 --rc genhtml_branch_coverage=1 00:13:43.402 --rc genhtml_function_coverage=1 00:13:43.402 --rc genhtml_legend=1 00:13:43.402 --rc geninfo_all_blocks=1 00:13:43.402 --rc geninfo_unexecuted_blocks=1 00:13:43.403 00:13:43.403 ' 00:13:43.403 19:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:43.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.403 --rc genhtml_branch_coverage=1 00:13:43.403 --rc genhtml_function_coverage=1 00:13:43.403 --rc genhtml_legend=1 00:13:43.403 --rc geninfo_all_blocks=1 00:13:43.403 --rc geninfo_unexecuted_blocks=1 00:13:43.403 00:13:43.403 ' 00:13:43.403 19:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:43.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.403 --rc genhtml_branch_coverage=1 00:13:43.403 --rc genhtml_function_coverage=1 00:13:43.403 --rc genhtml_legend=1 00:13:43.403 --rc geninfo_all_blocks=1 00:13:43.403 --rc geninfo_unexecuted_blocks=1 00:13:43.403 00:13:43.403 ' 00:13:43.403 19:34:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:43.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.403 --rc genhtml_branch_coverage=1 00:13:43.403 --rc genhtml_function_coverage=1 00:13:43.403 --rc genhtml_legend=1 00:13:43.403 --rc geninfo_all_blocks=1 00:13:43.403 --rc geninfo_unexecuted_blocks=1 00:13:43.403 00:13:43.403 ' 00:13:43.403 19:34:30 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.403 19:34:30 -- nvmf/common.sh@7 -- # uname -s 00:13:43.403 19:34:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.403 19:34:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.403 19:34:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.403 19:34:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.403 19:34:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.403 19:34:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.403 19:34:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.403 19:34:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.403 19:34:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.403 19:34:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:13:43.403 19:34:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:13:43.403 19:34:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.403 19:34:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.403 19:34:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.403 19:34:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.403 19:34:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.403 19:34:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.403 19:34:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.403 19:34:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.403 19:34:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.403 19:34:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.403 19:34:30 -- paths/export.sh@5 -- # export PATH 00:13:43.403 19:34:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.403 19:34:30 -- nvmf/common.sh@46 -- # : 0 00:13:43.403 19:34:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.403 19:34:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.403 19:34:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.403 19:34:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.403 19:34:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.403 19:34:30 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:43.403 19:34:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.403 19:34:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.403 19:34:30 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:43.403 19:34:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.403 19:34:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.403 19:34:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.403 19:34:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.403 19:34:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.403 19:34:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.403 19:34:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.403 19:34:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.403 19:34:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:43.403 19:34:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:43.403 19:34:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.403 19:34:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.403 19:34:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.403 19:34:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:43.403 19:34:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.403 19:34:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.403 19:34:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.403 19:34:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.403 19:34:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.403 19:34:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.403 19:34:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.403 19:34:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.403 19:34:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:43.662 19:34:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:43.662 Cannot find device "nvmf_tgt_br" 00:13:43.662 19:34:30 -- nvmf/common.sh@154 -- # true 00:13:43.662 19:34:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.662 Cannot find device "nvmf_tgt_br2" 00:13:43.662 19:34:30 -- nvmf/common.sh@155 -- # true 00:13:43.662 19:34:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:43.662 19:34:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:43.662 Cannot find device "nvmf_tgt_br" 00:13:43.662 19:34:30 -- nvmf/common.sh@157 -- # true 00:13:43.662 19:34:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:43.662 Cannot find device "nvmf_tgt_br2" 00:13:43.662 19:34:30 -- nvmf/common.sh@158 -- # true 00:13:43.662 19:34:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:43.662 19:34:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:43.662 19:34:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.662 19:34:30 -- nvmf/common.sh@161 -- # true 00:13:43.662 19:34:30 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.662 19:34:30 -- nvmf/common.sh@162 -- # true 00:13:43.662 19:34:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.662 19:34:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.662 19:34:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.662 19:34:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.662 19:34:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.662 19:34:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.662 19:34:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.663 19:34:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.663 19:34:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.663 19:34:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:43.663 19:34:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:43.663 19:34:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:43.663 19:34:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:43.663 19:34:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.663 19:34:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.663 19:34:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.663 19:34:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:43.663 19:34:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:43.663 19:34:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.663 19:34:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.663 19:34:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.922 19:34:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.922 19:34:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.922 19:34:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:43.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:13:43.922 00:13:43.922 --- 10.0.0.2 ping statistics --- 00:13:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.922 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:43.922 19:34:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:43.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:43.922 00:13:43.922 --- 10.0.0.3 ping statistics --- 00:13:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.922 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:43.922 19:34:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:43.922 00:13:43.922 --- 10.0.0.1 ping statistics --- 00:13:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.922 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:43.922 19:34:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.922 19:34:30 -- nvmf/common.sh@421 -- # return 0 00:13:43.922 19:34:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:43.922 19:34:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.922 19:34:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:43.922 19:34:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:43.922 19:34:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.922 19:34:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:43.922 19:34:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:43.922 19:34:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:43.922 19:34:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:43.922 19:34:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.922 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.922 19:34:30 -- nvmf/common.sh@469 -- # nvmfpid=81616 00:13:43.922 19:34:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.922 19:34:30 -- nvmf/common.sh@470 -- # waitforlisten 81616 00:13:43.922 19:34:30 -- common/autotest_common.sh@829 -- # '[' -z 81616 ']' 00:13:43.922 19:34:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.922 19:34:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.922 19:34:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.922 19:34:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.922 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.922 [2024-12-15 19:34:30.694954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:43.922 [2024-12-15 19:34:30.695049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.181 [2024-12-15 19:34:30.834820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.181 [2024-12-15 19:34:30.917184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.181 [2024-12-15 19:34:30.917383] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.181 [2024-12-15 19:34:30.917398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.181 [2024-12-15 19:34:30.917407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
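The ip/iptables trace above is the per-test veth network: the initiator end nvmf_init_if (10.0.0.1/24) stays in the root namespace, the target ends nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the peer interfaces are joined by the nvmf_br bridge, TCP port 4420 is opened, and the three pings confirm the path before nvmf_tgt is started inside the namespace with core mask 0xE. A condensed sketch of the same sequence, distilled from the commands in the trace (link-up steps and the second target interface are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                               # bridge the *_br peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target reachability
    # the target then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE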
00:13:44.181 [2024-12-15 19:34:30.917592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.181 [2024-12-15 19:34:30.918446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.181 [2024-12-15 19:34:30.918466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.117 19:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.117 19:34:31 -- common/autotest_common.sh@862 -- # return 0 00:13:45.117 19:34:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:45.117 19:34:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.117 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 19:34:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.117 19:34:31 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.117 19:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.117 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 [2024-12-15 19:34:31.783629] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.117 19:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.117 19:34:31 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.117 19:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.117 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 19:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.117 19:34:31 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.117 19:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.117 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 [2024-12-15 19:34:31.803764] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.117 19:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.117 19:34:31 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:45.117 19:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.117 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 NULL1 00:13:45.117 19:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.117 19:34:31 -- target/connect_stress.sh@21 -- # PERF_PID=81668 00:13:45.117 19:34:31 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:45.118 19:34:31 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:45.118 19:34:31 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- 
target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.118 19:34:31 -- target/connect_stress.sh@28 -- # cat 00:13:45.118 19:34:31 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:45.118 19:34:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.118 19:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.118 19:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.377 19:34:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.377 19:34:32 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:45.377 19:34:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.377 19:34:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.377 19:34:32 -- common/autotest_common.sh@10 -- # set +x 00:13:45.944 19:34:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.944 19:34:32 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:45.944 19:34:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.944 19:34:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.944 19:34:32 -- common/autotest_common.sh@10 -- # set +x 00:13:46.203 19:34:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.203 19:34:32 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:46.203 19:34:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.203 19:34:32 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:46.203 19:34:32 -- common/autotest_common.sh@10 -- # set +x 00:13:46.462 19:34:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.462 19:34:33 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:46.462 19:34:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.462 19:34:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.462 19:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:46.721 19:34:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.721 19:34:33 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:46.721 19:34:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.721 19:34:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.721 19:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:46.979 19:34:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.979 19:34:33 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:46.979 19:34:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.979 19:34:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.979 19:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:47.547 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.547 19:34:34 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:47.547 19:34:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.547 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.547 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:13:47.805 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.805 19:34:34 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:47.805 19:34:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.805 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.805 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:13:48.064 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.064 19:34:34 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:48.064 19:34:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.064 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.064 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:13:48.323 19:34:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.323 19:34:35 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:48.323 19:34:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.323 19:34:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.323 19:34:35 -- common/autotest_common.sh@10 -- # set +x 00:13:48.581 19:34:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.581 19:34:35 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:48.581 19:34:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.581 19:34:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.581 19:34:35 -- common/autotest_common.sh@10 -- # set +x 00:13:49.148 19:34:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.148 19:34:35 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:49.148 19:34:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.148 19:34:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.148 19:34:35 -- common/autotest_common.sh@10 -- # set +x 00:13:49.407 19:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.407 19:34:36 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:49.407 19:34:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.407 19:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.407 
19:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:49.666 19:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.666 19:34:36 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:49.666 19:34:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.666 19:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.666 19:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:49.925 19:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.925 19:34:36 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:49.925 19:34:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.925 19:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.925 19:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:50.184 19:34:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.184 19:34:37 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:50.184 19:34:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.184 19:34:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.184 19:34:37 -- common/autotest_common.sh@10 -- # set +x 00:13:50.752 19:34:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.752 19:34:37 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:50.752 19:34:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.752 19:34:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.752 19:34:37 -- common/autotest_common.sh@10 -- # set +x 00:13:51.010 19:34:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.010 19:34:37 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:51.010 19:34:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.010 19:34:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.010 19:34:37 -- common/autotest_common.sh@10 -- # set +x 00:13:51.269 19:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.269 19:34:38 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:51.269 19:34:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.269 19:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.269 19:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:51.527 19:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.527 19:34:38 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:51.527 19:34:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.527 19:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.527 19:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:52.095 19:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.095 19:34:38 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:52.095 19:34:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.095 19:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.095 19:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:52.353 19:34:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.353 19:34:39 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:52.353 19:34:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.353 19:34:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.353 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:52.612 19:34:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.612 19:34:39 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:52.612 19:34:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.612 19:34:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.612 19:34:39 -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.871 19:34:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.871 19:34:39 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:52.871 19:34:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.871 19:34:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.871 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:53.130 19:34:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.130 19:34:39 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:53.130 19:34:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.130 19:34:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.130 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:53.698 19:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.698 19:34:40 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:53.698 19:34:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.698 19:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.698 19:34:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.956 19:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.956 19:34:40 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:53.956 19:34:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.956 19:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.956 19:34:40 -- common/autotest_common.sh@10 -- # set +x 00:13:54.215 19:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.215 19:34:40 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:54.215 19:34:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.215 19:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.215 19:34:40 -- common/autotest_common.sh@10 -- # set +x 00:13:54.474 19:34:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.474 19:34:41 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:54.474 19:34:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.474 19:34:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.474 19:34:41 -- common/autotest_common.sh@10 -- # set +x 00:13:54.733 19:34:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.733 19:34:41 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:54.733 19:34:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.733 19:34:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.733 19:34:41 -- common/autotest_common.sh@10 -- # set +x 00:13:55.301 19:34:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.301 19:34:41 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:55.301 19:34:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.301 19:34:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.301 19:34:41 -- common/autotest_common.sh@10 -- # set +x 00:13:55.301 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.560 19:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.560 19:34:42 -- target/connect_stress.sh@34 -- # kill -0 81668 00:13:55.560 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81668) - No such process 00:13:55.560 19:34:42 -- target/connect_stress.sh@38 -- # wait 81668 00:13:55.560 19:34:42 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:55.560 19:34:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:55.560 19:34:42 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:55.560 19:34:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.560 19:34:42 -- nvmf/common.sh@116 -- # sync 00:13:55.560 19:34:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:55.560 19:34:42 -- nvmf/common.sh@119 -- # set +e 00:13:55.560 19:34:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:55.560 19:34:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:55.560 rmmod nvme_tcp 00:13:55.560 rmmod nvme_fabrics 00:13:55.560 rmmod nvme_keyring 00:13:55.560 19:34:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:55.560 19:34:42 -- nvmf/common.sh@123 -- # set -e 00:13:55.560 19:34:42 -- nvmf/common.sh@124 -- # return 0 00:13:55.560 19:34:42 -- nvmf/common.sh@477 -- # '[' -n 81616 ']' 00:13:55.560 19:34:42 -- nvmf/common.sh@478 -- # killprocess 81616 00:13:55.560 19:34:42 -- common/autotest_common.sh@936 -- # '[' -z 81616 ']' 00:13:55.560 19:34:42 -- common/autotest_common.sh@940 -- # kill -0 81616 00:13:55.560 19:34:42 -- common/autotest_common.sh@941 -- # uname 00:13:55.560 19:34:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.560 19:34:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81616 00:13:55.560 19:34:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:55.560 19:34:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:55.560 killing process with pid 81616 00:13:55.560 19:34:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81616' 00:13:55.560 19:34:42 -- common/autotest_common.sh@955 -- # kill 81616 00:13:55.560 19:34:42 -- common/autotest_common.sh@960 -- # wait 81616 00:13:55.818 19:34:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.818 19:34:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.818 19:34:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.818 19:34:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.818 19:34:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.818 19:34:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.818 19:34:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.818 19:34:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.818 19:34:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:55.818 00:13:55.818 real 0m12.582s 00:13:55.818 user 0m41.883s 00:13:55.818 sys 0m3.190s 00:13:55.818 19:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:55.818 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:13:55.818 ************************************ 00:13:55.818 END TEST nvmf_connect_stress 00:13:55.818 ************************************ 00:13:56.077 19:34:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.077 19:34:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:56.077 19:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.077 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:13:56.077 ************************************ 00:13:56.077 START TEST nvmf_fused_ordering 00:13:56.077 ************************************ 00:13:56.077 19:34:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.077 * Looking for test storage... 
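The nvmf_connect_stress block that ends above is essentially a liveness loop: while the backgrounded stress tool (pid 81668 in this run) is still alive, the script keeps issuing RPCs at the target, then reaps the process and cleans up once kill -0 reports "No such process". A minimal bash sketch of that pattern, assuming the rpc.txt batching works roughly as the trace suggests (the real test/nvmf/target/connect_stress.sh may differ):

  # sketch only -- approximates the loop visible in the connect_stress trace above
  testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target   # path from this run
  stress_pid=81668                                        # pid of the backgrounded stressor (value from this run)
  while kill -0 "$stress_pid" 2>/dev/null; do
      rpc_cmd < "$testdir/rpc.txt"                        # assumed: rpc.txt holds the batched RPC calls replayed each pass
  done
  wait "$stress_pid"                                      # loop exits once kill -0 fails
  rm -f "$testdir/rpc.txt"
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                                            # framework helper: unload nvme-tcp modules, stop the target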
00:13:56.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.077 19:34:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:56.077 19:34:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:56.077 19:34:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:56.077 19:34:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:56.077 19:34:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:56.077 19:34:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:56.077 19:34:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:56.077 19:34:42 -- scripts/common.sh@335 -- # IFS=.-: 00:13:56.077 19:34:42 -- scripts/common.sh@335 -- # read -ra ver1 00:13:56.077 19:34:42 -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.077 19:34:42 -- scripts/common.sh@336 -- # read -ra ver2 00:13:56.077 19:34:42 -- scripts/common.sh@337 -- # local 'op=<' 00:13:56.077 19:34:42 -- scripts/common.sh@339 -- # ver1_l=2 00:13:56.077 19:34:42 -- scripts/common.sh@340 -- # ver2_l=1 00:13:56.077 19:34:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:56.077 19:34:42 -- scripts/common.sh@343 -- # case "$op" in 00:13:56.077 19:34:42 -- scripts/common.sh@344 -- # : 1 00:13:56.077 19:34:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:56.077 19:34:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.077 19:34:42 -- scripts/common.sh@364 -- # decimal 1 00:13:56.077 19:34:42 -- scripts/common.sh@352 -- # local d=1 00:13:56.077 19:34:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.077 19:34:42 -- scripts/common.sh@354 -- # echo 1 00:13:56.077 19:34:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:56.077 19:34:42 -- scripts/common.sh@365 -- # decimal 2 00:13:56.077 19:34:42 -- scripts/common.sh@352 -- # local d=2 00:13:56.077 19:34:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.077 19:34:42 -- scripts/common.sh@354 -- # echo 2 00:13:56.077 19:34:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:56.077 19:34:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:56.077 19:34:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:56.077 19:34:42 -- scripts/common.sh@367 -- # return 0 00:13:56.077 19:34:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.077 19:34:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:56.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.077 --rc genhtml_branch_coverage=1 00:13:56.077 --rc genhtml_function_coverage=1 00:13:56.077 --rc genhtml_legend=1 00:13:56.077 --rc geninfo_all_blocks=1 00:13:56.077 --rc geninfo_unexecuted_blocks=1 00:13:56.077 00:13:56.077 ' 00:13:56.077 19:34:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:56.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.077 --rc genhtml_branch_coverage=1 00:13:56.077 --rc genhtml_function_coverage=1 00:13:56.077 --rc genhtml_legend=1 00:13:56.077 --rc geninfo_all_blocks=1 00:13:56.077 --rc geninfo_unexecuted_blocks=1 00:13:56.077 00:13:56.077 ' 00:13:56.077 19:34:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:56.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.077 --rc genhtml_branch_coverage=1 00:13:56.077 --rc genhtml_function_coverage=1 00:13:56.077 --rc genhtml_legend=1 00:13:56.077 --rc geninfo_all_blocks=1 00:13:56.077 --rc geninfo_unexecuted_blocks=1 00:13:56.077 00:13:56.077 ' 00:13:56.077 
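The cmp_versions / decimal trace above is a field-by-field dotted-version compare: it is what decides that the installed lcov (1.15 here) is older than 2, so the pre-2.x --rc option spelling is exported. The same idea in a condensed, self-contained form (a sketch, not the exact scripts/common.sh implementation):

  # sketch: dotted-version compare in the spirit of the scripts/common.sh trace above
  cmp_versions() {                              # usage: cmp_versions 1.15 '<' 2
      local op="$2" v a b
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"            # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"            # "2"    -> (2)
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing fields compare as 0
          if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
          if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all compared fields equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }          # lt 1.15 2 succeeds, so the older lcov options are kept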
19:34:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:56.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.077 --rc genhtml_branch_coverage=1 00:13:56.078 --rc genhtml_function_coverage=1 00:13:56.078 --rc genhtml_legend=1 00:13:56.078 --rc geninfo_all_blocks=1 00:13:56.078 --rc geninfo_unexecuted_blocks=1 00:13:56.078 00:13:56.078 ' 00:13:56.078 19:34:42 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.078 19:34:42 -- nvmf/common.sh@7 -- # uname -s 00:13:56.078 19:34:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.078 19:34:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.078 19:34:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.078 19:34:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.078 19:34:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.078 19:34:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.078 19:34:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.078 19:34:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.078 19:34:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.078 19:34:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:13:56.078 19:34:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:13:56.078 19:34:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.078 19:34:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.078 19:34:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.078 19:34:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.078 19:34:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.078 19:34:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.078 19:34:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.078 19:34:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.078 19:34:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.078 19:34:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.078 19:34:42 -- paths/export.sh@5 -- # export PATH 00:13:56.078 19:34:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.078 19:34:42 -- nvmf/common.sh@46 -- # : 0 00:13:56.078 19:34:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:56.078 19:34:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:56.078 19:34:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:56.078 19:34:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.078 19:34:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.078 19:34:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:56.078 19:34:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:56.078 19:34:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:56.078 19:34:42 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:56.078 19:34:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:56.078 19:34:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.078 19:34:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:56.078 19:34:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:56.078 19:34:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:56.078 19:34:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.078 19:34:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.078 19:34:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.078 19:34:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:56.078 19:34:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:56.078 19:34:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.078 19:34:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.078 19:34:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:56.078 19:34:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:56.078 19:34:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.078 19:34:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.078 19:34:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.078 19:34:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:56.078 19:34:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.078 19:34:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.078 19:34:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.078 19:34:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.078 19:34:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:56.078 19:34:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:56.337 Cannot find device "nvmf_tgt_br" 00:13:56.337 19:34:42 -- nvmf/common.sh@154 -- # true 00:13:56.337 19:34:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.337 Cannot find device "nvmf_tgt_br2" 00:13:56.337 19:34:42 -- nvmf/common.sh@155 -- # true 00:13:56.337 19:34:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:56.337 19:34:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:56.337 Cannot find device "nvmf_tgt_br" 00:13:56.337 19:34:43 -- nvmf/common.sh@157 -- # true 00:13:56.337 19:34:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:56.337 Cannot find device "nvmf_tgt_br2" 00:13:56.337 19:34:43 -- nvmf/common.sh@158 -- # true 00:13:56.337 19:34:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:56.337 19:34:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:56.337 19:34:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.337 19:34:43 -- nvmf/common.sh@161 -- # true 00:13:56.337 19:34:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.337 19:34:43 -- nvmf/common.sh@162 -- # true 00:13:56.337 19:34:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.337 19:34:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.337 19:34:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.337 19:34:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.337 19:34:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.337 19:34:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.337 19:34:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.337 19:34:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.337 19:34:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.337 19:34:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:56.337 19:34:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:56.337 19:34:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:56.337 19:34:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:56.337 19:34:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.337 19:34:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.337 19:34:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.337 19:34:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:56.337 19:34:43 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:56.337 19:34:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.337 19:34:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.595 19:34:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.595 19:34:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.595 19:34:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.595 19:34:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:56.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:56.595 00:13:56.595 --- 10.0.0.2 ping statistics --- 00:13:56.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.596 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:56.596 19:34:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:56.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:56.596 00:13:56.596 --- 10.0.0.3 ping statistics --- 00:13:56.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.596 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:56.596 19:34:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:56.596 00:13:56.596 --- 10.0.0.1 ping statistics --- 00:13:56.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.596 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:56.596 19:34:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.596 19:34:43 -- nvmf/common.sh@421 -- # return 0 00:13:56.596 19:34:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.596 19:34:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.596 19:34:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.596 19:34:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.596 19:34:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.596 19:34:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.596 19:34:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.596 19:34:43 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:56.596 19:34:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.596 19:34:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.596 19:34:43 -- common/autotest_common.sh@10 -- # set +x 00:13:56.596 19:34:43 -- nvmf/common.sh@469 -- # nvmfpid=82011 00:13:56.596 19:34:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.596 19:34:43 -- nvmf/common.sh@470 -- # waitforlisten 82011 00:13:56.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.596 19:34:43 -- common/autotest_common.sh@829 -- # '[' -z 82011 ']' 00:13:56.596 19:34:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.596 19:34:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.596 19:34:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
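The nvmf_veth_init sequence above (after tearing down any stale devices, hence the "Cannot find device" lines) builds a small host-side topology for the TCP tests: the initiator stays in the default namespace on 10.0.0.1, the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, and a bridge plus two iptables rules connect the veth peers. Condensed into one block using the same commands as the trace (run as root; it modifies host networking):

  # condensed from the nvmf_veth_init trace above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2         # target pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # initiator -> target reachability check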
00:13:56.596 19:34:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.596 19:34:43 -- common/autotest_common.sh@10 -- # set +x 00:13:56.596 [2024-12-15 19:34:43.352656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:56.596 [2024-12-15 19:34:43.352987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.854 [2024-12-15 19:34:43.492041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.854 [2024-12-15 19:34:43.565063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.854 [2024-12-15 19:34:43.565543] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.854 [2024-12-15 19:34:43.565657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.854 [2024-12-15 19:34:43.565788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.854 [2024-12-15 19:34:43.566098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.789 19:34:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.789 19:34:44 -- common/autotest_common.sh@862 -- # return 0 00:13:57.789 19:34:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:57.789 19:34:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 19:34:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.790 19:34:44 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 [2024-12-15 19:34:44.372989] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 [2024-12-15 19:34:44.389078] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 NULL1 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:57.790 19:34:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.790 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.790 19:34:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.790 19:34:44 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:57.790 [2024-12-15 19:34:44.437547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:57.790 [2024-12-15 19:34:44.437791] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82061 ] 00:13:58.049 Attached to nqn.2016-06.io.spdk:cnode1 00:13:58.049 Namespace ID: 1 size: 1GB 00:13:58.049 fused_ordering(0) 00:13:58.049 fused_ordering(1) 00:13:58.049 fused_ordering(2) 00:13:58.049 fused_ordering(3) 00:13:58.049 fused_ordering(4) 00:13:58.049 fused_ordering(5) 00:13:58.049 fused_ordering(6) 00:13:58.049 fused_ordering(7) 00:13:58.049 fused_ordering(8) 00:13:58.049 fused_ordering(9) 00:13:58.049 fused_ordering(10) 00:13:58.049 fused_ordering(11) 00:13:58.049 fused_ordering(12) 00:13:58.049 fused_ordering(13) 00:13:58.049 fused_ordering(14) 00:13:58.049 fused_ordering(15) 00:13:58.049 fused_ordering(16) 00:13:58.049 fused_ordering(17) 00:13:58.049 fused_ordering(18) 00:13:58.049 fused_ordering(19) 00:13:58.049 fused_ordering(20) 00:13:58.049 fused_ordering(21) 00:13:58.049 fused_ordering(22) 00:13:58.049 fused_ordering(23) 00:13:58.049 fused_ordering(24) 00:13:58.049 fused_ordering(25) 00:13:58.049 fused_ordering(26) 00:13:58.049 fused_ordering(27) 00:13:58.049 fused_ordering(28) 00:13:58.049 fused_ordering(29) 00:13:58.049 fused_ordering(30) 00:13:58.049 fused_ordering(31) 00:13:58.049 fused_ordering(32) 00:13:58.049 fused_ordering(33) 00:13:58.049 fused_ordering(34) 00:13:58.049 fused_ordering(35) 00:13:58.049 fused_ordering(36) 00:13:58.049 fused_ordering(37) 00:13:58.049 fused_ordering(38) 00:13:58.049 fused_ordering(39) 00:13:58.049 fused_ordering(40) 00:13:58.049 fused_ordering(41) 00:13:58.049 fused_ordering(42) 00:13:58.049 fused_ordering(43) 00:13:58.049 fused_ordering(44) 00:13:58.049 fused_ordering(45) 00:13:58.049 fused_ordering(46) 00:13:58.049 fused_ordering(47) 00:13:58.049 fused_ordering(48) 00:13:58.049 fused_ordering(49) 00:13:58.049 fused_ordering(50) 00:13:58.049 fused_ordering(51) 00:13:58.049 fused_ordering(52) 00:13:58.049 fused_ordering(53) 00:13:58.049 fused_ordering(54) 00:13:58.049 fused_ordering(55) 00:13:58.049 fused_ordering(56) 00:13:58.049 fused_ordering(57) 00:13:58.049 fused_ordering(58) 00:13:58.049 fused_ordering(59) 00:13:58.049 fused_ordering(60) 00:13:58.049 fused_ordering(61) 00:13:58.049 fused_ordering(62) 00:13:58.049 fused_ordering(63) 00:13:58.049 fused_ordering(64) 00:13:58.049 fused_ordering(65) 00:13:58.049 fused_ordering(66) 00:13:58.049 fused_ordering(67) 00:13:58.049 fused_ordering(68) 00:13:58.049 fused_ordering(69) 00:13:58.049 fused_ordering(70) 00:13:58.049 fused_ordering(71) 00:13:58.049 fused_ordering(72) 00:13:58.049 
fused_ordering(73) 00:13:58.049 fused_ordering(74) 00:13:58.049 fused_ordering(75) 00:13:58.049 fused_ordering(76) 00:13:58.049 fused_ordering(77) 00:13:58.049 fused_ordering(78) 00:13:58.049 fused_ordering(79) 00:13:58.049 fused_ordering(80) 00:13:58.049 fused_ordering(81) 00:13:58.049 fused_ordering(82) 00:13:58.049 fused_ordering(83) 00:13:58.049 fused_ordering(84) 00:13:58.049 fused_ordering(85) 00:13:58.049 fused_ordering(86) 00:13:58.049 fused_ordering(87) 00:13:58.049 fused_ordering(88) 00:13:58.049 fused_ordering(89) 00:13:58.049 fused_ordering(90) 00:13:58.049 fused_ordering(91) 00:13:58.049 fused_ordering(92) 00:13:58.049 fused_ordering(93) 00:13:58.049 fused_ordering(94) 00:13:58.049 fused_ordering(95) 00:13:58.049 fused_ordering(96) 00:13:58.049 fused_ordering(97) 00:13:58.049 fused_ordering(98) 00:13:58.049 fused_ordering(99) 00:13:58.049 fused_ordering(100) 00:13:58.049 fused_ordering(101) 00:13:58.049 fused_ordering(102) 00:13:58.049 fused_ordering(103) 00:13:58.049 fused_ordering(104) 00:13:58.049 fused_ordering(105) 00:13:58.049 fused_ordering(106) 00:13:58.049 fused_ordering(107) 00:13:58.049 fused_ordering(108) 00:13:58.049 fused_ordering(109) 00:13:58.049 fused_ordering(110) 00:13:58.049 fused_ordering(111) 00:13:58.049 fused_ordering(112) 00:13:58.049 fused_ordering(113) 00:13:58.049 fused_ordering(114) 00:13:58.049 fused_ordering(115) 00:13:58.049 fused_ordering(116) 00:13:58.049 fused_ordering(117) 00:13:58.049 fused_ordering(118) 00:13:58.049 fused_ordering(119) 00:13:58.049 fused_ordering(120) 00:13:58.049 fused_ordering(121) 00:13:58.049 fused_ordering(122) 00:13:58.049 fused_ordering(123) 00:13:58.049 fused_ordering(124) 00:13:58.049 fused_ordering(125) 00:13:58.049 fused_ordering(126) 00:13:58.049 fused_ordering(127) 00:13:58.049 fused_ordering(128) 00:13:58.049 fused_ordering(129) 00:13:58.049 fused_ordering(130) 00:13:58.049 fused_ordering(131) 00:13:58.049 fused_ordering(132) 00:13:58.049 fused_ordering(133) 00:13:58.049 fused_ordering(134) 00:13:58.049 fused_ordering(135) 00:13:58.049 fused_ordering(136) 00:13:58.049 fused_ordering(137) 00:13:58.049 fused_ordering(138) 00:13:58.049 fused_ordering(139) 00:13:58.049 fused_ordering(140) 00:13:58.049 fused_ordering(141) 00:13:58.049 fused_ordering(142) 00:13:58.049 fused_ordering(143) 00:13:58.049 fused_ordering(144) 00:13:58.049 fused_ordering(145) 00:13:58.049 fused_ordering(146) 00:13:58.049 fused_ordering(147) 00:13:58.049 fused_ordering(148) 00:13:58.049 fused_ordering(149) 00:13:58.049 fused_ordering(150) 00:13:58.049 fused_ordering(151) 00:13:58.049 fused_ordering(152) 00:13:58.049 fused_ordering(153) 00:13:58.049 fused_ordering(154) 00:13:58.049 fused_ordering(155) 00:13:58.049 fused_ordering(156) 00:13:58.049 fused_ordering(157) 00:13:58.049 fused_ordering(158) 00:13:58.049 fused_ordering(159) 00:13:58.049 fused_ordering(160) 00:13:58.049 fused_ordering(161) 00:13:58.049 fused_ordering(162) 00:13:58.050 fused_ordering(163) 00:13:58.050 fused_ordering(164) 00:13:58.050 fused_ordering(165) 00:13:58.050 fused_ordering(166) 00:13:58.050 fused_ordering(167) 00:13:58.050 fused_ordering(168) 00:13:58.050 fused_ordering(169) 00:13:58.050 fused_ordering(170) 00:13:58.050 fused_ordering(171) 00:13:58.050 fused_ordering(172) 00:13:58.050 fused_ordering(173) 00:13:58.050 fused_ordering(174) 00:13:58.050 fused_ordering(175) 00:13:58.050 fused_ordering(176) 00:13:58.050 fused_ordering(177) 00:13:58.050 fused_ordering(178) 00:13:58.050 fused_ordering(179) 00:13:58.050 fused_ordering(180) 00:13:58.050 
fused_ordering(181) 00:13:58.050 fused_ordering(182) 00:13:58.050 fused_ordering(183) 00:13:58.050 fused_ordering(184) 00:13:58.050 fused_ordering(185) 00:13:58.050 fused_ordering(186) 00:13:58.050 fused_ordering(187) 00:13:58.050 fused_ordering(188) 00:13:58.050 fused_ordering(189) 00:13:58.050 fused_ordering(190) 00:13:58.050 fused_ordering(191) 00:13:58.050 fused_ordering(192) 00:13:58.050 fused_ordering(193) 00:13:58.050 fused_ordering(194) 00:13:58.050 fused_ordering(195) 00:13:58.050 fused_ordering(196) 00:13:58.050 fused_ordering(197) 00:13:58.050 fused_ordering(198) 00:13:58.050 fused_ordering(199) 00:13:58.050 fused_ordering(200) 00:13:58.050 fused_ordering(201) 00:13:58.050 fused_ordering(202) 00:13:58.050 fused_ordering(203) 00:13:58.050 fused_ordering(204) 00:13:58.050 fused_ordering(205) 00:13:58.309 fused_ordering(206) 00:13:58.309 fused_ordering(207) 00:13:58.309 fused_ordering(208) 00:13:58.309 fused_ordering(209) 00:13:58.309 fused_ordering(210) 00:13:58.309 fused_ordering(211) 00:13:58.309 fused_ordering(212) 00:13:58.309 fused_ordering(213) 00:13:58.309 fused_ordering(214) 00:13:58.309 fused_ordering(215) 00:13:58.309 fused_ordering(216) 00:13:58.309 fused_ordering(217) 00:13:58.309 fused_ordering(218) 00:13:58.309 fused_ordering(219) 00:13:58.309 fused_ordering(220) 00:13:58.309 fused_ordering(221) 00:13:58.309 fused_ordering(222) 00:13:58.309 fused_ordering(223) 00:13:58.309 fused_ordering(224) 00:13:58.309 fused_ordering(225) 00:13:58.309 fused_ordering(226) 00:13:58.309 fused_ordering(227) 00:13:58.309 fused_ordering(228) 00:13:58.309 fused_ordering(229) 00:13:58.309 fused_ordering(230) 00:13:58.309 fused_ordering(231) 00:13:58.309 fused_ordering(232) 00:13:58.309 fused_ordering(233) 00:13:58.309 fused_ordering(234) 00:13:58.309 fused_ordering(235) 00:13:58.309 fused_ordering(236) 00:13:58.309 fused_ordering(237) 00:13:58.309 fused_ordering(238) 00:13:58.309 fused_ordering(239) 00:13:58.309 fused_ordering(240) 00:13:58.309 fused_ordering(241) 00:13:58.309 fused_ordering(242) 00:13:58.309 fused_ordering(243) 00:13:58.309 fused_ordering(244) 00:13:58.309 fused_ordering(245) 00:13:58.309 fused_ordering(246) 00:13:58.309 fused_ordering(247) 00:13:58.309 fused_ordering(248) 00:13:58.309 fused_ordering(249) 00:13:58.309 fused_ordering(250) 00:13:58.309 fused_ordering(251) 00:13:58.309 fused_ordering(252) 00:13:58.309 fused_ordering(253) 00:13:58.309 fused_ordering(254) 00:13:58.309 fused_ordering(255) 00:13:58.309 fused_ordering(256) 00:13:58.309 fused_ordering(257) 00:13:58.309 fused_ordering(258) 00:13:58.309 fused_ordering(259) 00:13:58.309 fused_ordering(260) 00:13:58.309 fused_ordering(261) 00:13:58.309 fused_ordering(262) 00:13:58.309 fused_ordering(263) 00:13:58.309 fused_ordering(264) 00:13:58.309 fused_ordering(265) 00:13:58.309 fused_ordering(266) 00:13:58.309 fused_ordering(267) 00:13:58.309 fused_ordering(268) 00:13:58.309 fused_ordering(269) 00:13:58.309 fused_ordering(270) 00:13:58.309 fused_ordering(271) 00:13:58.309 fused_ordering(272) 00:13:58.309 fused_ordering(273) 00:13:58.309 fused_ordering(274) 00:13:58.309 fused_ordering(275) 00:13:58.309 fused_ordering(276) 00:13:58.309 fused_ordering(277) 00:13:58.309 fused_ordering(278) 00:13:58.309 fused_ordering(279) 00:13:58.309 fused_ordering(280) 00:13:58.309 fused_ordering(281) 00:13:58.309 fused_ordering(282) 00:13:58.309 fused_ordering(283) 00:13:58.309 fused_ordering(284) 00:13:58.309 fused_ordering(285) 00:13:58.309 fused_ordering(286) 00:13:58.309 fused_ordering(287) 00:13:58.309 fused_ordering(288) 
00:13:58.309 fused_ordering(289) 00:13:58.309 fused_ordering(290) 00:13:58.309 fused_ordering(291) 00:13:58.309 fused_ordering(292) 00:13:58.309 fused_ordering(293) 00:13:58.309 fused_ordering(294) 00:13:58.309 fused_ordering(295) 00:13:58.309 fused_ordering(296) 00:13:58.309 fused_ordering(297) 00:13:58.309 fused_ordering(298) 00:13:58.309 fused_ordering(299) 00:13:58.309 fused_ordering(300) 00:13:58.309 fused_ordering(301) 00:13:58.309 fused_ordering(302) 00:13:58.309 fused_ordering(303) 00:13:58.309 fused_ordering(304) 00:13:58.309 fused_ordering(305) 00:13:58.309 fused_ordering(306) 00:13:58.309 fused_ordering(307) 00:13:58.309 fused_ordering(308) 00:13:58.309 fused_ordering(309) 00:13:58.309 fused_ordering(310) 00:13:58.309 fused_ordering(311) 00:13:58.309 fused_ordering(312) 00:13:58.309 fused_ordering(313) 00:13:58.309 fused_ordering(314) 00:13:58.309 fused_ordering(315) 00:13:58.309 fused_ordering(316) 00:13:58.309 fused_ordering(317) 00:13:58.309 fused_ordering(318) 00:13:58.309 fused_ordering(319) 00:13:58.309 fused_ordering(320) 00:13:58.309 fused_ordering(321) 00:13:58.309 fused_ordering(322) 00:13:58.309 fused_ordering(323) 00:13:58.309 fused_ordering(324) 00:13:58.309 fused_ordering(325) 00:13:58.309 fused_ordering(326) 00:13:58.309 fused_ordering(327) 00:13:58.309 fused_ordering(328) 00:13:58.309 fused_ordering(329) 00:13:58.309 fused_ordering(330) 00:13:58.309 fused_ordering(331) 00:13:58.309 fused_ordering(332) 00:13:58.309 fused_ordering(333) 00:13:58.309 fused_ordering(334) 00:13:58.309 fused_ordering(335) 00:13:58.309 fused_ordering(336) 00:13:58.309 fused_ordering(337) 00:13:58.309 fused_ordering(338) 00:13:58.309 fused_ordering(339) 00:13:58.309 fused_ordering(340) 00:13:58.309 fused_ordering(341) 00:13:58.309 fused_ordering(342) 00:13:58.309 fused_ordering(343) 00:13:58.309 fused_ordering(344) 00:13:58.309 fused_ordering(345) 00:13:58.309 fused_ordering(346) 00:13:58.309 fused_ordering(347) 00:13:58.309 fused_ordering(348) 00:13:58.309 fused_ordering(349) 00:13:58.309 fused_ordering(350) 00:13:58.309 fused_ordering(351) 00:13:58.309 fused_ordering(352) 00:13:58.309 fused_ordering(353) 00:13:58.309 fused_ordering(354) 00:13:58.309 fused_ordering(355) 00:13:58.309 fused_ordering(356) 00:13:58.309 fused_ordering(357) 00:13:58.309 fused_ordering(358) 00:13:58.309 fused_ordering(359) 00:13:58.309 fused_ordering(360) 00:13:58.309 fused_ordering(361) 00:13:58.309 fused_ordering(362) 00:13:58.309 fused_ordering(363) 00:13:58.309 fused_ordering(364) 00:13:58.309 fused_ordering(365) 00:13:58.309 fused_ordering(366) 00:13:58.309 fused_ordering(367) 00:13:58.309 fused_ordering(368) 00:13:58.309 fused_ordering(369) 00:13:58.309 fused_ordering(370) 00:13:58.309 fused_ordering(371) 00:13:58.309 fused_ordering(372) 00:13:58.309 fused_ordering(373) 00:13:58.309 fused_ordering(374) 00:13:58.309 fused_ordering(375) 00:13:58.309 fused_ordering(376) 00:13:58.309 fused_ordering(377) 00:13:58.309 fused_ordering(378) 00:13:58.309 fused_ordering(379) 00:13:58.309 fused_ordering(380) 00:13:58.309 fused_ordering(381) 00:13:58.309 fused_ordering(382) 00:13:58.309 fused_ordering(383) 00:13:58.309 fused_ordering(384) 00:13:58.309 fused_ordering(385) 00:13:58.309 fused_ordering(386) 00:13:58.309 fused_ordering(387) 00:13:58.309 fused_ordering(388) 00:13:58.309 fused_ordering(389) 00:13:58.309 fused_ordering(390) 00:13:58.310 fused_ordering(391) 00:13:58.310 fused_ordering(392) 00:13:58.310 fused_ordering(393) 00:13:58.310 fused_ordering(394) 00:13:58.310 fused_ordering(395) 00:13:58.310 
fused_ordering(396) 00:13:58.310 fused_ordering(397) 00:13:58.310 fused_ordering(398) 00:13:58.310 fused_ordering(399) 00:13:58.310 fused_ordering(400) 00:13:58.310 fused_ordering(401) 00:13:58.310 fused_ordering(402) 00:13:58.310 fused_ordering(403) 00:13:58.310 fused_ordering(404) 00:13:58.310 fused_ordering(405) 00:13:58.310 fused_ordering(406) 00:13:58.310 fused_ordering(407) 00:13:58.310 fused_ordering(408) 00:13:58.310 fused_ordering(409) 00:13:58.310 fused_ordering(410) 00:13:58.569 fused_ordering(411) 00:13:58.569 fused_ordering(412) 00:13:58.569 fused_ordering(413) 00:13:58.569 fused_ordering(414) 00:13:58.569 fused_ordering(415) 00:13:58.569 fused_ordering(416) 00:13:58.569 fused_ordering(417) 00:13:58.569 fused_ordering(418) 00:13:58.569 fused_ordering(419) 00:13:58.569 fused_ordering(420) 00:13:58.569 fused_ordering(421) 00:13:58.569 fused_ordering(422) 00:13:58.569 fused_ordering(423) 00:13:58.569 fused_ordering(424) 00:13:58.569 fused_ordering(425) 00:13:58.569 fused_ordering(426) 00:13:58.569 fused_ordering(427) 00:13:58.569 fused_ordering(428) 00:13:58.569 fused_ordering(429) 00:13:58.569 fused_ordering(430) 00:13:58.569 fused_ordering(431) 00:13:58.569 fused_ordering(432) 00:13:58.569 fused_ordering(433) 00:13:58.569 fused_ordering(434) 00:13:58.569 fused_ordering(435) 00:13:58.569 fused_ordering(436) 00:13:58.569 fused_ordering(437) 00:13:58.569 fused_ordering(438) 00:13:58.569 fused_ordering(439) 00:13:58.569 fused_ordering(440) 00:13:58.569 fused_ordering(441) 00:13:58.569 fused_ordering(442) 00:13:58.569 fused_ordering(443) 00:13:58.569 fused_ordering(444) 00:13:58.569 fused_ordering(445) 00:13:58.569 fused_ordering(446) 00:13:58.569 fused_ordering(447) 00:13:58.569 fused_ordering(448) 00:13:58.569 fused_ordering(449) 00:13:58.569 fused_ordering(450) 00:13:58.569 fused_ordering(451) 00:13:58.569 fused_ordering(452) 00:13:58.569 fused_ordering(453) 00:13:58.569 fused_ordering(454) 00:13:58.569 fused_ordering(455) 00:13:58.569 fused_ordering(456) 00:13:58.569 fused_ordering(457) 00:13:58.569 fused_ordering(458) 00:13:58.569 fused_ordering(459) 00:13:58.569 fused_ordering(460) 00:13:58.569 fused_ordering(461) 00:13:58.569 fused_ordering(462) 00:13:58.569 fused_ordering(463) 00:13:58.570 fused_ordering(464) 00:13:58.570 fused_ordering(465) 00:13:58.570 fused_ordering(466) 00:13:58.570 fused_ordering(467) 00:13:58.570 fused_ordering(468) 00:13:58.570 fused_ordering(469) 00:13:58.570 fused_ordering(470) 00:13:58.570 fused_ordering(471) 00:13:58.570 fused_ordering(472) 00:13:58.570 fused_ordering(473) 00:13:58.570 fused_ordering(474) 00:13:58.570 fused_ordering(475) 00:13:58.570 fused_ordering(476) 00:13:58.570 fused_ordering(477) 00:13:58.570 fused_ordering(478) 00:13:58.570 fused_ordering(479) 00:13:58.570 fused_ordering(480) 00:13:58.570 fused_ordering(481) 00:13:58.570 fused_ordering(482) 00:13:58.570 fused_ordering(483) 00:13:58.570 fused_ordering(484) 00:13:58.570 fused_ordering(485) 00:13:58.570 fused_ordering(486) 00:13:58.570 fused_ordering(487) 00:13:58.570 fused_ordering(488) 00:13:58.570 fused_ordering(489) 00:13:58.570 fused_ordering(490) 00:13:58.570 fused_ordering(491) 00:13:58.570 fused_ordering(492) 00:13:58.570 fused_ordering(493) 00:13:58.570 fused_ordering(494) 00:13:58.570 fused_ordering(495) 00:13:58.570 fused_ordering(496) 00:13:58.570 fused_ordering(497) 00:13:58.570 fused_ordering(498) 00:13:58.570 fused_ordering(499) 00:13:58.570 fused_ordering(500) 00:13:58.570 fused_ordering(501) 00:13:58.570 fused_ordering(502) 00:13:58.570 fused_ordering(503) 
00:13:58.570 fused_ordering(504) 00:13:58.570 fused_ordering(505) 00:13:58.570 fused_ordering(506) 00:13:58.570 fused_ordering(507) 00:13:58.570 fused_ordering(508) 00:13:58.570 fused_ordering(509) 00:13:58.570 fused_ordering(510) 00:13:58.570 fused_ordering(511) 00:13:58.570 fused_ordering(512) 00:13:58.570 fused_ordering(513) 00:13:58.570 fused_ordering(514) 00:13:58.570 fused_ordering(515) 00:13:58.570 fused_ordering(516) 00:13:58.570 fused_ordering(517) 00:13:58.570 fused_ordering(518) 00:13:58.570 fused_ordering(519) 00:13:58.570 fused_ordering(520) 00:13:58.570 fused_ordering(521) 00:13:58.570 fused_ordering(522) 00:13:58.570 fused_ordering(523) 00:13:58.570 fused_ordering(524) 00:13:58.570 fused_ordering(525) 00:13:58.570 fused_ordering(526) 00:13:58.570 fused_ordering(527) 00:13:58.570 fused_ordering(528) 00:13:58.570 fused_ordering(529) 00:13:58.570 fused_ordering(530) 00:13:58.570 fused_ordering(531) 00:13:58.570 fused_ordering(532) 00:13:58.570 fused_ordering(533) 00:13:58.570 fused_ordering(534) 00:13:58.570 fused_ordering(535) 00:13:58.570 fused_ordering(536) 00:13:58.570 fused_ordering(537) 00:13:58.570 fused_ordering(538) 00:13:58.570 fused_ordering(539) 00:13:58.570 fused_ordering(540) 00:13:58.570 fused_ordering(541) 00:13:58.570 fused_ordering(542) 00:13:58.570 fused_ordering(543) 00:13:58.570 fused_ordering(544) 00:13:58.570 fused_ordering(545) 00:13:58.570 fused_ordering(546) 00:13:58.570 fused_ordering(547) 00:13:58.570 fused_ordering(548) 00:13:58.570 fused_ordering(549) 00:13:58.570 fused_ordering(550) 00:13:58.570 fused_ordering(551) 00:13:58.570 fused_ordering(552) 00:13:58.570 fused_ordering(553) 00:13:58.570 fused_ordering(554) 00:13:58.570 fused_ordering(555) 00:13:58.570 fused_ordering(556) 00:13:58.570 fused_ordering(557) 00:13:58.570 fused_ordering(558) 00:13:58.570 fused_ordering(559) 00:13:58.570 fused_ordering(560) 00:13:58.570 fused_ordering(561) 00:13:58.570 fused_ordering(562) 00:13:58.570 fused_ordering(563) 00:13:58.570 fused_ordering(564) 00:13:58.570 fused_ordering(565) 00:13:58.570 fused_ordering(566) 00:13:58.570 fused_ordering(567) 00:13:58.570 fused_ordering(568) 00:13:58.570 fused_ordering(569) 00:13:58.570 fused_ordering(570) 00:13:58.570 fused_ordering(571) 00:13:58.570 fused_ordering(572) 00:13:58.570 fused_ordering(573) 00:13:58.570 fused_ordering(574) 00:13:58.570 fused_ordering(575) 00:13:58.570 fused_ordering(576) 00:13:58.570 fused_ordering(577) 00:13:58.570 fused_ordering(578) 00:13:58.570 fused_ordering(579) 00:13:58.570 fused_ordering(580) 00:13:58.570 fused_ordering(581) 00:13:58.570 fused_ordering(582) 00:13:58.570 fused_ordering(583) 00:13:58.570 fused_ordering(584) 00:13:58.570 fused_ordering(585) 00:13:58.570 fused_ordering(586) 00:13:58.570 fused_ordering(587) 00:13:58.570 fused_ordering(588) 00:13:58.570 fused_ordering(589) 00:13:58.570 fused_ordering(590) 00:13:58.570 fused_ordering(591) 00:13:58.570 fused_ordering(592) 00:13:58.570 fused_ordering(593) 00:13:58.570 fused_ordering(594) 00:13:58.570 fused_ordering(595) 00:13:58.570 fused_ordering(596) 00:13:58.570 fused_ordering(597) 00:13:58.570 fused_ordering(598) 00:13:58.570 fused_ordering(599) 00:13:58.570 fused_ordering(600) 00:13:58.570 fused_ordering(601) 00:13:58.570 fused_ordering(602) 00:13:58.570 fused_ordering(603) 00:13:58.570 fused_ordering(604) 00:13:58.570 fused_ordering(605) 00:13:58.570 fused_ordering(606) 00:13:58.570 fused_ordering(607) 00:13:58.570 fused_ordering(608) 00:13:58.570 fused_ordering(609) 00:13:58.570 fused_ordering(610) 00:13:58.570 
fused_ordering(611) 00:13:58.570 fused_ordering(612) 00:13:58.570 fused_ordering(613) 00:13:58.570 fused_ordering(614) 00:13:58.570 fused_ordering(615) 00:13:59.137 fused_ordering(616) 00:13:59.137 fused_ordering(617) 00:13:59.137 fused_ordering(618) 00:13:59.137 fused_ordering(619) 00:13:59.137 fused_ordering(620) 00:13:59.137 fused_ordering(621) 00:13:59.137 fused_ordering(622) 00:13:59.137 fused_ordering(623) 00:13:59.137 fused_ordering(624) 00:13:59.137 fused_ordering(625) 00:13:59.137 fused_ordering(626) 00:13:59.137 fused_ordering(627) 00:13:59.137 fused_ordering(628) 00:13:59.137 fused_ordering(629) 00:13:59.137 fused_ordering(630) 00:13:59.137 fused_ordering(631) 00:13:59.137 fused_ordering(632) 00:13:59.137 fused_ordering(633) 00:13:59.137 fused_ordering(634) 00:13:59.137 fused_ordering(635) 00:13:59.137 fused_ordering(636) 00:13:59.137 fused_ordering(637) 00:13:59.137 fused_ordering(638) 00:13:59.137 fused_ordering(639) 00:13:59.137 fused_ordering(640) 00:13:59.137 fused_ordering(641) 00:13:59.137 fused_ordering(642) 00:13:59.137 fused_ordering(643) 00:13:59.137 fused_ordering(644) 00:13:59.137 fused_ordering(645) 00:13:59.137 fused_ordering(646) 00:13:59.137 fused_ordering(647) 00:13:59.137 fused_ordering(648) 00:13:59.137 fused_ordering(649) 00:13:59.137 fused_ordering(650) 00:13:59.137 fused_ordering(651) 00:13:59.137 fused_ordering(652) 00:13:59.137 fused_ordering(653) 00:13:59.137 fused_ordering(654) 00:13:59.137 fused_ordering(655) 00:13:59.137 fused_ordering(656) 00:13:59.137 fused_ordering(657) 00:13:59.137 fused_ordering(658) 00:13:59.137 fused_ordering(659) 00:13:59.137 fused_ordering(660) 00:13:59.137 fused_ordering(661) 00:13:59.137 fused_ordering(662) 00:13:59.137 fused_ordering(663) 00:13:59.137 fused_ordering(664) 00:13:59.137 fused_ordering(665) 00:13:59.137 fused_ordering(666) 00:13:59.137 fused_ordering(667) 00:13:59.137 fused_ordering(668) 00:13:59.137 fused_ordering(669) 00:13:59.137 fused_ordering(670) 00:13:59.137 fused_ordering(671) 00:13:59.137 fused_ordering(672) 00:13:59.138 fused_ordering(673) 00:13:59.138 fused_ordering(674) 00:13:59.138 fused_ordering(675) 00:13:59.138 fused_ordering(676) 00:13:59.138 fused_ordering(677) 00:13:59.138 fused_ordering(678) 00:13:59.138 fused_ordering(679) 00:13:59.138 fused_ordering(680) 00:13:59.138 fused_ordering(681) 00:13:59.138 fused_ordering(682) 00:13:59.138 fused_ordering(683) 00:13:59.138 fused_ordering(684) 00:13:59.138 fused_ordering(685) 00:13:59.138 fused_ordering(686) 00:13:59.138 fused_ordering(687) 00:13:59.138 fused_ordering(688) 00:13:59.138 fused_ordering(689) 00:13:59.138 fused_ordering(690) 00:13:59.138 fused_ordering(691) 00:13:59.138 fused_ordering(692) 00:13:59.138 fused_ordering(693) 00:13:59.138 fused_ordering(694) 00:13:59.138 fused_ordering(695) 00:13:59.138 fused_ordering(696) 00:13:59.138 fused_ordering(697) 00:13:59.138 fused_ordering(698) 00:13:59.138 fused_ordering(699) 00:13:59.138 fused_ordering(700) 00:13:59.138 fused_ordering(701) 00:13:59.138 fused_ordering(702) 00:13:59.138 fused_ordering(703) 00:13:59.138 fused_ordering(704) 00:13:59.138 fused_ordering(705) 00:13:59.138 fused_ordering(706) 00:13:59.138 fused_ordering(707) 00:13:59.138 fused_ordering(708) 00:13:59.138 fused_ordering(709) 00:13:59.138 fused_ordering(710) 00:13:59.138 fused_ordering(711) 00:13:59.138 fused_ordering(712) 00:13:59.138 fused_ordering(713) 00:13:59.138 fused_ordering(714) 00:13:59.138 fused_ordering(715) 00:13:59.138 fused_ordering(716) 00:13:59.138 fused_ordering(717) 00:13:59.138 fused_ordering(718) 
00:13:59.138 fused_ordering(719) 00:13:59.138 fused_ordering(720) 00:13:59.138 fused_ordering(721) 00:13:59.138 fused_ordering(722) 00:13:59.138 fused_ordering(723) 00:13:59.138 fused_ordering(724) 00:13:59.138 fused_ordering(725) 00:13:59.138 fused_ordering(726) 00:13:59.138 fused_ordering(727) 00:13:59.138 fused_ordering(728) 00:13:59.138 fused_ordering(729) 00:13:59.138 fused_ordering(730) 00:13:59.138 fused_ordering(731) 00:13:59.138 fused_ordering(732) 00:13:59.138 fused_ordering(733) 00:13:59.138 fused_ordering(734) 00:13:59.138 fused_ordering(735) 00:13:59.138 fused_ordering(736) 00:13:59.138 fused_ordering(737) 00:13:59.138 fused_ordering(738) 00:13:59.138 fused_ordering(739) 00:13:59.138 fused_ordering(740) 00:13:59.138 fused_ordering(741) 00:13:59.138 fused_ordering(742) 00:13:59.138 fused_ordering(743) 00:13:59.138 fused_ordering(744) 00:13:59.138 fused_ordering(745) 00:13:59.138 fused_ordering(746) 00:13:59.138 fused_ordering(747) 00:13:59.138 fused_ordering(748) 00:13:59.138 fused_ordering(749) 00:13:59.138 fused_ordering(750) 00:13:59.138 fused_ordering(751) 00:13:59.138 fused_ordering(752) 00:13:59.138 fused_ordering(753) 00:13:59.138 fused_ordering(754) 00:13:59.138 fused_ordering(755) 00:13:59.138 fused_ordering(756) 00:13:59.138 fused_ordering(757) 00:13:59.138 fused_ordering(758) 00:13:59.138 fused_ordering(759) 00:13:59.138 fused_ordering(760) 00:13:59.138 fused_ordering(761) 00:13:59.138 fused_ordering(762) 00:13:59.138 fused_ordering(763) 00:13:59.138 fused_ordering(764) 00:13:59.138 fused_ordering(765) 00:13:59.138 fused_ordering(766) 00:13:59.138 fused_ordering(767) 00:13:59.138 fused_ordering(768) 00:13:59.138 fused_ordering(769) 00:13:59.138 fused_ordering(770) 00:13:59.138 fused_ordering(771) 00:13:59.138 fused_ordering(772) 00:13:59.138 fused_ordering(773) 00:13:59.138 fused_ordering(774) 00:13:59.138 fused_ordering(775) 00:13:59.138 fused_ordering(776) 00:13:59.138 fused_ordering(777) 00:13:59.138 fused_ordering(778) 00:13:59.138 fused_ordering(779) 00:13:59.138 fused_ordering(780) 00:13:59.138 fused_ordering(781) 00:13:59.138 fused_ordering(782) 00:13:59.138 fused_ordering(783) 00:13:59.138 fused_ordering(784) 00:13:59.138 fused_ordering(785) 00:13:59.138 fused_ordering(786) 00:13:59.138 fused_ordering(787) 00:13:59.138 fused_ordering(788) 00:13:59.138 fused_ordering(789) 00:13:59.138 fused_ordering(790) 00:13:59.138 fused_ordering(791) 00:13:59.138 fused_ordering(792) 00:13:59.138 fused_ordering(793) 00:13:59.138 fused_ordering(794) 00:13:59.138 fused_ordering(795) 00:13:59.138 fused_ordering(796) 00:13:59.138 fused_ordering(797) 00:13:59.138 fused_ordering(798) 00:13:59.138 fused_ordering(799) 00:13:59.138 fused_ordering(800) 00:13:59.138 fused_ordering(801) 00:13:59.138 fused_ordering(802) 00:13:59.138 fused_ordering(803) 00:13:59.138 fused_ordering(804) 00:13:59.138 fused_ordering(805) 00:13:59.138 fused_ordering(806) 00:13:59.138 fused_ordering(807) 00:13:59.138 fused_ordering(808) 00:13:59.138 fused_ordering(809) 00:13:59.138 fused_ordering(810) 00:13:59.138 fused_ordering(811) 00:13:59.138 fused_ordering(812) 00:13:59.138 fused_ordering(813) 00:13:59.138 fused_ordering(814) 00:13:59.138 fused_ordering(815) 00:13:59.138 fused_ordering(816) 00:13:59.138 fused_ordering(817) 00:13:59.138 fused_ordering(818) 00:13:59.138 fused_ordering(819) 00:13:59.138 fused_ordering(820) 00:13:59.397 fused_ordering(821) 00:13:59.397 fused_ordering(822) 00:13:59.397 fused_ordering(823) 00:13:59.397 fused_ordering(824) 00:13:59.397 fused_ordering(825) 00:13:59.397 
fused_ordering(826) 00:13:59.397 fused_ordering(827) 00:13:59.397 fused_ordering(828) 00:13:59.397 fused_ordering(829) 00:13:59.397 fused_ordering(830) 00:13:59.397 fused_ordering(831) 00:13:59.397 fused_ordering(832) 00:13:59.397 fused_ordering(833) 00:13:59.397 fused_ordering(834) 00:13:59.397 fused_ordering(835) 00:13:59.397 fused_ordering(836) 00:13:59.397 fused_ordering(837) 00:13:59.397 fused_ordering(838) 00:13:59.397 fused_ordering(839) 00:13:59.397 fused_ordering(840) 00:13:59.397 fused_ordering(841) 00:13:59.397 fused_ordering(842) 00:13:59.397 fused_ordering(843) 00:13:59.397 fused_ordering(844) 00:13:59.397 fused_ordering(845) 00:13:59.397 fused_ordering(846) 00:13:59.397 fused_ordering(847) 00:13:59.397 fused_ordering(848) 00:13:59.397 fused_ordering(849) 00:13:59.397 fused_ordering(850) 00:13:59.397 fused_ordering(851) 00:13:59.397 fused_ordering(852) 00:13:59.397 fused_ordering(853) 00:13:59.397 fused_ordering(854) 00:13:59.397 fused_ordering(855) 00:13:59.397 fused_ordering(856) 00:13:59.397 fused_ordering(857) 00:13:59.397 fused_ordering(858) 00:13:59.397 fused_ordering(859) 00:13:59.397 fused_ordering(860) 00:13:59.397 fused_ordering(861) 00:13:59.397 fused_ordering(862) 00:13:59.397 fused_ordering(863) 00:13:59.397 fused_ordering(864) 00:13:59.397 fused_ordering(865) 00:13:59.397 fused_ordering(866) 00:13:59.397 fused_ordering(867) 00:13:59.397 fused_ordering(868) 00:13:59.397 fused_ordering(869) 00:13:59.397 fused_ordering(870) 00:13:59.397 fused_ordering(871) 00:13:59.397 fused_ordering(872) 00:13:59.397 fused_ordering(873) 00:13:59.397 fused_ordering(874) 00:13:59.397 fused_ordering(875) 00:13:59.397 fused_ordering(876) 00:13:59.397 fused_ordering(877) 00:13:59.397 fused_ordering(878) 00:13:59.397 fused_ordering(879) 00:13:59.397 fused_ordering(880) 00:13:59.397 fused_ordering(881) 00:13:59.397 fused_ordering(882) 00:13:59.397 fused_ordering(883) 00:13:59.397 fused_ordering(884) 00:13:59.397 fused_ordering(885) 00:13:59.397 fused_ordering(886) 00:13:59.397 fused_ordering(887) 00:13:59.397 fused_ordering(888) 00:13:59.397 fused_ordering(889) 00:13:59.397 fused_ordering(890) 00:13:59.397 fused_ordering(891) 00:13:59.397 fused_ordering(892) 00:13:59.397 fused_ordering(893) 00:13:59.397 fused_ordering(894) 00:13:59.397 fused_ordering(895) 00:13:59.397 fused_ordering(896) 00:13:59.397 fused_ordering(897) 00:13:59.397 fused_ordering(898) 00:13:59.397 fused_ordering(899) 00:13:59.397 fused_ordering(900) 00:13:59.397 fused_ordering(901) 00:13:59.397 fused_ordering(902) 00:13:59.397 fused_ordering(903) 00:13:59.397 fused_ordering(904) 00:13:59.397 fused_ordering(905) 00:13:59.397 fused_ordering(906) 00:13:59.397 fused_ordering(907) 00:13:59.397 fused_ordering(908) 00:13:59.397 fused_ordering(909) 00:13:59.397 fused_ordering(910) 00:13:59.397 fused_ordering(911) 00:13:59.397 fused_ordering(912) 00:13:59.397 fused_ordering(913) 00:13:59.397 fused_ordering(914) 00:13:59.397 fused_ordering(915) 00:13:59.397 fused_ordering(916) 00:13:59.397 fused_ordering(917) 00:13:59.397 fused_ordering(918) 00:13:59.397 fused_ordering(919) 00:13:59.397 fused_ordering(920) 00:13:59.397 fused_ordering(921) 00:13:59.397 fused_ordering(922) 00:13:59.397 fused_ordering(923) 00:13:59.397 fused_ordering(924) 00:13:59.397 fused_ordering(925) 00:13:59.397 fused_ordering(926) 00:13:59.397 fused_ordering(927) 00:13:59.397 fused_ordering(928) 00:13:59.397 fused_ordering(929) 00:13:59.397 fused_ordering(930) 00:13:59.397 fused_ordering(931) 00:13:59.397 fused_ordering(932) 00:13:59.397 fused_ordering(933) 
00:13:59.397 fused_ordering(934) 00:13:59.397 fused_ordering(935) 00:13:59.397 fused_ordering(936) 00:13:59.397 fused_ordering(937) 00:13:59.397 fused_ordering(938) 00:13:59.397 fused_ordering(939) 00:13:59.397 fused_ordering(940) 00:13:59.397 fused_ordering(941) 00:13:59.397 fused_ordering(942) 00:13:59.397 fused_ordering(943) 00:13:59.397 fused_ordering(944) 00:13:59.397 fused_ordering(945) 00:13:59.397 fused_ordering(946) 00:13:59.397 fused_ordering(947) 00:13:59.397 fused_ordering(948) 00:13:59.397 fused_ordering(949) 00:13:59.397 fused_ordering(950) 00:13:59.397 fused_ordering(951) 00:13:59.397 fused_ordering(952) 00:13:59.397 fused_ordering(953) 00:13:59.397 fused_ordering(954) 00:13:59.397 fused_ordering(955) 00:13:59.397 fused_ordering(956) 00:13:59.397 fused_ordering(957) 00:13:59.397 fused_ordering(958) 00:13:59.397 fused_ordering(959) 00:13:59.397 fused_ordering(960) 00:13:59.397 fused_ordering(961) 00:13:59.397 fused_ordering(962) 00:13:59.397 fused_ordering(963) 00:13:59.397 fused_ordering(964) 00:13:59.397 fused_ordering(965) 00:13:59.397 fused_ordering(966) 00:13:59.397 fused_ordering(967) 00:13:59.397 fused_ordering(968) 00:13:59.397 fused_ordering(969) 00:13:59.397 fused_ordering(970) 00:13:59.397 fused_ordering(971) 00:13:59.397 fused_ordering(972) 00:13:59.397 fused_ordering(973) 00:13:59.397 fused_ordering(974) 00:13:59.397 fused_ordering(975) 00:13:59.397 fused_ordering(976) 00:13:59.397 fused_ordering(977) 00:13:59.397 fused_ordering(978) 00:13:59.397 fused_ordering(979) 00:13:59.397 fused_ordering(980) 00:13:59.397 fused_ordering(981) 00:13:59.397 fused_ordering(982) 00:13:59.397 fused_ordering(983) 00:13:59.397 fused_ordering(984) 00:13:59.397 fused_ordering(985) 00:13:59.397 fused_ordering(986) 00:13:59.397 fused_ordering(987) 00:13:59.397 fused_ordering(988) 00:13:59.398 fused_ordering(989) 00:13:59.398 fused_ordering(990) 00:13:59.398 fused_ordering(991) 00:13:59.398 fused_ordering(992) 00:13:59.398 fused_ordering(993) 00:13:59.398 fused_ordering(994) 00:13:59.398 fused_ordering(995) 00:13:59.398 fused_ordering(996) 00:13:59.398 fused_ordering(997) 00:13:59.398 fused_ordering(998) 00:13:59.398 fused_ordering(999) 00:13:59.398 fused_ordering(1000) 00:13:59.398 fused_ordering(1001) 00:13:59.398 fused_ordering(1002) 00:13:59.398 fused_ordering(1003) 00:13:59.398 fused_ordering(1004) 00:13:59.398 fused_ordering(1005) 00:13:59.398 fused_ordering(1006) 00:13:59.398 fused_ordering(1007) 00:13:59.398 fused_ordering(1008) 00:13:59.398 fused_ordering(1009) 00:13:59.398 fused_ordering(1010) 00:13:59.398 fused_ordering(1011) 00:13:59.398 fused_ordering(1012) 00:13:59.398 fused_ordering(1013) 00:13:59.398 fused_ordering(1014) 00:13:59.398 fused_ordering(1015) 00:13:59.398 fused_ordering(1016) 00:13:59.398 fused_ordering(1017) 00:13:59.398 fused_ordering(1018) 00:13:59.398 fused_ordering(1019) 00:13:59.398 fused_ordering(1020) 00:13:59.398 fused_ordering(1021) 00:13:59.398 fused_ordering(1022) 00:13:59.398 fused_ordering(1023) 00:13:59.398 19:34:46 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:59.398 19:34:46 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:59.398 19:34:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.398 19:34:46 -- nvmf/common.sh@116 -- # sync 00:13:59.398 19:34:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:59.398 19:34:46 -- nvmf/common.sh@119 -- # set +e 00:13:59.398 19:34:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:59.398 19:34:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:59.398 rmmod 
nvme_tcp 00:13:59.398 rmmod nvme_fabrics 00:13:59.727 rmmod nvme_keyring 00:13:59.727 19:34:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:59.727 19:34:46 -- nvmf/common.sh@123 -- # set -e 00:13:59.727 19:34:46 -- nvmf/common.sh@124 -- # return 0 00:13:59.727 19:34:46 -- nvmf/common.sh@477 -- # '[' -n 82011 ']' 00:13:59.727 19:34:46 -- nvmf/common.sh@478 -- # killprocess 82011 00:13:59.727 19:34:46 -- common/autotest_common.sh@936 -- # '[' -z 82011 ']' 00:13:59.727 19:34:46 -- common/autotest_common.sh@940 -- # kill -0 82011 00:13:59.727 19:34:46 -- common/autotest_common.sh@941 -- # uname 00:13:59.727 19:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.727 19:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82011 00:13:59.727 killing process with pid 82011 00:13:59.727 19:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:59.727 19:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:59.727 19:34:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82011' 00:13:59.727 19:34:46 -- common/autotest_common.sh@955 -- # kill 82011 00:13:59.727 19:34:46 -- common/autotest_common.sh@960 -- # wait 82011 00:13:59.987 19:34:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:59.987 19:34:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:59.987 19:34:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:59.987 19:34:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.987 19:34:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:59.987 19:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.987 19:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.987 19:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.987 19:34:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:59.987 ************************************ 00:13:59.987 END TEST nvmf_fused_ordering 00:13:59.987 ************************************ 00:13:59.987 00:13:59.987 real 0m3.931s 00:13:59.987 user 0m4.436s 00:13:59.987 sys 0m1.361s 00:13:59.987 19:34:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:59.987 19:34:46 -- common/autotest_common.sh@10 -- # set +x 00:13:59.987 19:34:46 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:59.987 19:34:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:59.987 19:34:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.987 19:34:46 -- common/autotest_common.sh@10 -- # set +x 00:13:59.987 ************************************ 00:13:59.987 START TEST nvmf_delete_subsystem 00:13:59.987 ************************************ 00:13:59.987 19:34:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:59.987 * Looking for test storage... 
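The teardown traced above is the shared nvmftestfini path. Condensed to the steps visible in the log, it amounts to roughly the following sketch (not the full nvmf/common.sh helper; the namespace-removal command is an assumption, since the trace only shows the wrapped _remove_spdk_ns call):

    set +e
    modprobe -v -r nvme-tcp            # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from these
    modprobe -v -r nvme-fabrics        # the real helper retries this pair up to 20 times
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"              # nvmf_tgt started for the test, pid 82011 in this run
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if                   # drop the initiator-side test address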
00:13:59.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.987 19:34:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:59.987 19:34:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:59.987 19:34:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:59.987 19:34:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:59.987 19:34:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:59.987 19:34:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:59.987 19:34:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:59.987 19:34:46 -- scripts/common.sh@335 -- # IFS=.-: 00:13:59.987 19:34:46 -- scripts/common.sh@335 -- # read -ra ver1 00:13:59.987 19:34:46 -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.987 19:34:46 -- scripts/common.sh@336 -- # read -ra ver2 00:13:59.987 19:34:46 -- scripts/common.sh@337 -- # local 'op=<' 00:13:59.987 19:34:46 -- scripts/common.sh@339 -- # ver1_l=2 00:13:59.987 19:34:46 -- scripts/common.sh@340 -- # ver2_l=1 00:13:59.987 19:34:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:59.987 19:34:46 -- scripts/common.sh@343 -- # case "$op" in 00:13:59.987 19:34:46 -- scripts/common.sh@344 -- # : 1 00:13:59.987 19:34:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:59.987 19:34:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.987 19:34:46 -- scripts/common.sh@364 -- # decimal 1 00:13:59.987 19:34:46 -- scripts/common.sh@352 -- # local d=1 00:13:59.987 19:34:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.987 19:34:46 -- scripts/common.sh@354 -- # echo 1 00:13:59.987 19:34:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:59.987 19:34:46 -- scripts/common.sh@365 -- # decimal 2 00:13:59.987 19:34:46 -- scripts/common.sh@352 -- # local d=2 00:13:59.987 19:34:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.987 19:34:46 -- scripts/common.sh@354 -- # echo 2 00:13:59.987 19:34:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:59.987 19:34:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:59.987 19:34:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:59.987 19:34:46 -- scripts/common.sh@367 -- # return 0 00:13:59.987 19:34:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.987 19:34:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.987 --rc genhtml_branch_coverage=1 00:13:59.987 --rc genhtml_function_coverage=1 00:13:59.987 --rc genhtml_legend=1 00:13:59.987 --rc geninfo_all_blocks=1 00:13:59.987 --rc geninfo_unexecuted_blocks=1 00:13:59.987 00:13:59.987 ' 00:13:59.987 19:34:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.987 --rc genhtml_branch_coverage=1 00:13:59.987 --rc genhtml_function_coverage=1 00:13:59.987 --rc genhtml_legend=1 00:13:59.987 --rc geninfo_all_blocks=1 00:13:59.987 --rc geninfo_unexecuted_blocks=1 00:13:59.987 00:13:59.987 ' 00:13:59.987 19:34:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.987 --rc genhtml_branch_coverage=1 00:13:59.987 --rc genhtml_function_coverage=1 00:13:59.987 --rc genhtml_legend=1 00:13:59.987 --rc geninfo_all_blocks=1 00:13:59.987 --rc geninfo_unexecuted_blocks=1 00:13:59.987 00:13:59.987 ' 00:13:59.987 
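The lt 1.15 2 exchange above is the harness comparing the detected lcov version (1.15) against 2 before choosing the older --rc lcov_branch_coverage / lcov_function_coverage option spelling seen just below. The field-by-field comparison it steps through reduces to roughly this illustrative helper (the real scripts/common.sh cmp_versions also sanitizes non-numeric fields before comparing):

    lt() {   # succeed when version $1 sorts before version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides: strictly smaller
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
        done
        return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags selected"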
19:34:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.987 --rc genhtml_branch_coverage=1 00:13:59.987 --rc genhtml_function_coverage=1 00:13:59.987 --rc genhtml_legend=1 00:13:59.987 --rc geninfo_all_blocks=1 00:13:59.987 --rc geninfo_unexecuted_blocks=1 00:13:59.987 00:13:59.987 ' 00:13:59.987 19:34:46 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.987 19:34:46 -- nvmf/common.sh@7 -- # uname -s 00:13:59.987 19:34:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.987 19:34:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.987 19:34:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.987 19:34:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.987 19:34:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.987 19:34:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.987 19:34:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.987 19:34:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.987 19:34:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.987 19:34:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.247 19:34:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:00.247 19:34:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:00.247 19:34:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.247 19:34:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.247 19:34:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.247 19:34:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.247 19:34:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.247 19:34:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.247 19:34:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.247 19:34:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.247 19:34:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.247 19:34:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.247 19:34:46 -- paths/export.sh@5 -- # export PATH 00:14:00.247 19:34:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.247 19:34:46 -- nvmf/common.sh@46 -- # : 0 00:14:00.247 19:34:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.247 19:34:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.247 19:34:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.247 19:34:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.247 19:34:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.247 19:34:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:00.247 19:34:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.247 19:34:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.247 19:34:46 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:00.247 19:34:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.247 19:34:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.247 19:34:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.247 19:34:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.247 19:34:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.247 19:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.247 19:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.247 19:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.247 19:34:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:00.247 19:34:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:00.247 19:34:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:00.247 19:34:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:00.247 19:34:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:00.247 19:34:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:00.247 19:34:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.247 19:34:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.247 19:34:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.247 19:34:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:00.247 19:34:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.247 19:34:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.247 19:34:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.247 19:34:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:00.247 19:34:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.247 19:34:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.247 19:34:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.247 19:34:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.247 19:34:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:00.247 19:34:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:00.247 Cannot find device "nvmf_tgt_br" 00:14:00.247 19:34:46 -- nvmf/common.sh@154 -- # true 00:14:00.247 19:34:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.247 Cannot find device "nvmf_tgt_br2" 00:14:00.247 19:34:46 -- nvmf/common.sh@155 -- # true 00:14:00.247 19:34:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.247 19:34:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.247 Cannot find device "nvmf_tgt_br" 00:14:00.247 19:34:46 -- nvmf/common.sh@157 -- # true 00:14:00.247 19:34:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.247 Cannot find device "nvmf_tgt_br2" 00:14:00.247 19:34:46 -- nvmf/common.sh@158 -- # true 00:14:00.247 19:34:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.247 19:34:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.247 19:34:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.247 19:34:47 -- nvmf/common.sh@161 -- # true 00:14:00.247 19:34:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.247 19:34:47 -- nvmf/common.sh@162 -- # true 00:14:00.247 19:34:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.247 19:34:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.247 19:34:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.247 19:34:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.247 19:34:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.247 19:34:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.506 19:34:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.506 19:34:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.506 19:34:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.506 19:34:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.506 19:34:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.506 19:34:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.506 19:34:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.506 19:34:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.506 19:34:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.506 19:34:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.506 19:34:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.506 19:34:47 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.506 19:34:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.506 19:34:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.506 19:34:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.506 19:34:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.506 19:34:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.506 19:34:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:00.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:00.506 00:14:00.506 --- 10.0.0.2 ping statistics --- 00:14:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.506 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:00.506 19:34:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:00.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:00.506 00:14:00.506 --- 10.0.0.3 ping statistics --- 00:14:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.506 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:00.506 19:34:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:00.506 00:14:00.506 --- 10.0.0.1 ping statistics --- 00:14:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.507 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:00.507 19:34:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.507 19:34:47 -- nvmf/common.sh@421 -- # return 0 00:14:00.507 19:34:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.507 19:34:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.507 19:34:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.507 19:34:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.507 19:34:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.507 19:34:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.507 19:34:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.507 19:34:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:00.507 19:34:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.507 19:34:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.507 19:34:47 -- common/autotest_common.sh@10 -- # set +x 00:14:00.507 19:34:47 -- nvmf/common.sh@469 -- # nvmfpid=82253 00:14:00.507 19:34:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:00.507 19:34:47 -- nvmf/common.sh@470 -- # waitforlisten 82253 00:14:00.507 19:34:47 -- common/autotest_common.sh@829 -- # '[' -z 82253 ']' 00:14:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.507 19:34:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.507 19:34:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.507 19:34:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
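For readers unfamiliar with the harness, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology with the target interfaces inside the nvmf_tgt_ns_spdk namespace. Distilled into plain commands (all of them appear in the trace; the interface names and addresses are just the ones this test uses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # reachability checks, as in the trace

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are the preceding cleanup pass finding nothing to delete on a fresh runner; in this context they are not errors.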
00:14:00.507 19:34:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.507 19:34:47 -- common/autotest_common.sh@10 -- # set +x 00:14:00.507 [2024-12-15 19:34:47.351483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:00.507 [2024-12-15 19:34:47.352253] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.765 [2024-12-15 19:34:47.494683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.765 [2024-12-15 19:34:47.570897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.765 [2024-12-15 19:34:47.571493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.765 [2024-12-15 19:34:47.571639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.765 [2024-12-15 19:34:47.571789] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.765 [2024-12-15 19:34:47.572141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.765 [2024-12-15 19:34:47.572157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.701 19:34:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.701 19:34:48 -- common/autotest_common.sh@862 -- # return 0 00:14:01.701 19:34:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.701 19:34:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 19:34:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 [2024-12-15 19:34:48.439208] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 [2024-12-15 19:34:48.455747] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 NULL1 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- 
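The target-side setup that rpc_cmd performs above can be reproduced with the stock scripts/rpc.py client; the arguments below are exactly the ones the wrapper forwards in the trace, and the socket-wait loop is a simplified stand-in for waitforlisten (paths relative to an SPDK checkout):

    # Start nvmf_tgt inside the test namespace, as nvmfappstart does above
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # TCP transport, one subsystem with a listener on 10.0.0.2:4420, and a null bdev to back it
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512

As the next trace lines show, the delete_subsystem test then wraps NULL1 in a Delay0 device via bdev_delay_create and attaches it with nvmf_subsystem_add_ns; the artificial latency is what keeps I/O queued long enough for the deletion to happen while requests are still in flight.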
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 Delay0 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.701 19:34:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.701 19:34:48 -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 19:34:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@28 -- # perf_pid=82304 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:01.701 19:34:48 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:01.960 [2024-12-15 19:34:48.639913] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:03.863 19:34:50 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.863 19:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.863 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 
starting I/O failed: -6 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 Write completed with error (sct=0, sc=8) 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.863 starting I/O failed: -6 00:14:03.863 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 [2024-12-15 19:34:50.672762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491610 is same with the state(5) to be set 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 
Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 starting I/O failed: -6 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 [2024-12-15 19:34:50.675564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda7800c350 is same with the state(5) to be set 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read 
completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Write completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:03.864 Read completed with error (sct=0, sc=8) 00:14:04.799 [2024-12-15 19:34:51.653183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454040 is same with the state(5) to be set 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Write completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.799 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, 
sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 [2024-12-15 19:34:51.673745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491360 is same with the state(5) to be set 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 [2024-12-15 19:34:51.674761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24918c0 is same with the state(5) to be set 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 [2024-12-15 19:34:51.675079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda7800c600 is same with the state(5) to be set 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Write completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, 
sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 Read completed with error (sct=0, sc=8) 00:14:04.800 [2024-12-15 19:34:51.675696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda7800bf20 is same with the state(5) to be set 00:14:04.800 [2024-12-15 19:34:51.676739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454040 (9): Bad file descriptor 00:14:04.800 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:04.800 19:34:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.800 19:34:51 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:04.800 19:34:51 -- target/delete_subsystem.sh@35 -- # kill -0 82304 00:14:04.800 19:34:51 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:04.800 Initializing NVMe Controllers 00:14:04.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:04.800 Controller IO queue size 128, less than required. 00:14:04.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:04.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:04.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:04.800 Initialization complete. Launching workers. 00:14:04.800 ======================================================== 00:14:04.800 Latency(us) 00:14:04.800 Device Information : IOPS MiB/s Average min max 00:14:04.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.01 0.09 880564.90 370.20 1009525.74 00:14:04.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.67 0.07 948966.54 333.11 1010988.76 00:14:04.800 ======================================================== 00:14:04.800 Total : 325.68 0.16 911789.46 333.11 1010988.76 00:14:04.800 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@35 -- # kill -0 82304 00:14:05.367 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82304) - No such process 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@45 -- # NOT wait 82304 00:14:05.367 19:34:52 -- common/autotest_common.sh@650 -- # local es=0 00:14:05.367 19:34:52 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82304 00:14:05.367 19:34:52 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:05.367 19:34:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.367 19:34:52 -- common/autotest_common.sh@642 -- # type -t wait 00:14:05.367 19:34:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.367 19:34:52 -- common/autotest_common.sh@653 -- # wait 82304 00:14:05.367 19:34:52 -- common/autotest_common.sh@653 -- # es=1 00:14:05.367 19:34:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.367 19:34:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.367 19:34:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.367 19:34:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.367 19:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.367 19:34:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@49 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.367 19:34:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.367 19:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.367 [2024-12-15 19:34:52.202787] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.367 19:34:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.367 19:34:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.367 19:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.367 19:34:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@54 -- # perf_pid=82349 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:05.367 19:34:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.626 [2024-12-15 19:34:52.372001] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:05.884 19:34:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.884 19:34:52 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:05.884 19:34:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:06.450 19:34:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:06.450 19:34:53 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:06.450 19:34:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.017 19:34:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:07.017 19:34:53 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:07.017 19:34:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.584 19:34:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:07.584 19:34:54 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:07.584 19:34:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.152 19:34:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.152 19:34:54 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:08.152 19:34:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.413 19:34:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.413 19:34:55 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:08.413 19:34:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.673 Initializing NVMe Controllers 00:14:08.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.673 Controller IO queue size 128, less than required. 00:14:08.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:08.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:08.673 Initialization complete. Launching workers. 
00:14:08.673 ======================================================== 00:14:08.673 Latency(us) 00:14:08.673 Device Information : IOPS MiB/s Average min max 00:14:08.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003178.89 1000113.07 1040421.42 00:14:08.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005985.80 1000933.59 1013809.40 00:14:08.673 ======================================================== 00:14:08.673 Total : 256.00 0.12 1004582.34 1000113.07 1040421.42 00:14:08.673 00:14:08.931 19:34:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.931 19:34:55 -- target/delete_subsystem.sh@57 -- # kill -0 82349 00:14:08.931 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82349) - No such process 00:14:08.931 19:34:55 -- target/delete_subsystem.sh@67 -- # wait 82349 00:14:08.931 19:34:55 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:08.931 19:34:55 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:08.931 19:34:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:08.931 19:34:55 -- nvmf/common.sh@116 -- # sync 00:14:08.931 19:34:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:08.931 19:34:55 -- nvmf/common.sh@119 -- # set +e 00:14:08.931 19:34:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:08.931 19:34:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:08.931 rmmod nvme_tcp 00:14:08.931 rmmod nvme_fabrics 00:14:09.190 rmmod nvme_keyring 00:14:09.190 19:34:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.190 19:34:55 -- nvmf/common.sh@123 -- # set -e 00:14:09.190 19:34:55 -- nvmf/common.sh@124 -- # return 0 00:14:09.190 19:34:55 -- nvmf/common.sh@477 -- # '[' -n 82253 ']' 00:14:09.190 19:34:55 -- nvmf/common.sh@478 -- # killprocess 82253 00:14:09.190 19:34:55 -- common/autotest_common.sh@936 -- # '[' -z 82253 ']' 00:14:09.190 19:34:55 -- common/autotest_common.sh@940 -- # kill -0 82253 00:14:09.190 19:34:55 -- common/autotest_common.sh@941 -- # uname 00:14:09.190 19:34:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.190 19:34:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82253 00:14:09.190 killing process with pid 82253 00:14:09.190 19:34:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.190 19:34:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.190 19:34:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82253' 00:14:09.190 19:34:55 -- common/autotest_common.sh@955 -- # kill 82253 00:14:09.190 19:34:55 -- common/autotest_common.sh@960 -- # wait 82253 00:14:09.449 19:34:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:09.449 19:34:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:09.449 19:34:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:09.449 19:34:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.449 19:34:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:09.449 19:34:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.449 19:34:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.449 19:34:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.449 19:34:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:09.449 ************************************ 00:14:09.449 END TEST nvmf_delete_subsystem 00:14:09.449 ************************************ 
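The first perf round above exercises deletion under load: drive queued I/O at the subsystem with spdk_nvme_perf, delete the subsystem while requests are outstanding, and require the initiator process to exit rather than hang. The long runs of "completed with error (sct=0, sc=8)" lines and the "kill: (82304) - No such process" message are therefore the expected outcome, not a failure. Stripped of the harness plumbing, that round's control flow is roughly the following sketch (illustrative variable names):

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                            # let the connection and I/O ramp up
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do          # poll until perf dies from the I/O errors
        (( delay++ > 30 )) && exit 1                   # give up if it never exits
        sleep 0.5
    done

The second round then recreates the subsystem, listener, and Delay0 namespace and runs a 3-second perf job that completes on its own; the ~1.00 s average latencies in its table line up with the 1000000 us delays passed to bdev_delay_create earlier.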
00:14:09.449 00:14:09.449 real 0m9.460s 00:14:09.449 user 0m28.981s 00:14:09.449 sys 0m1.556s 00:14:09.449 19:34:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:09.449 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:14:09.449 19:34:56 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:09.449 19:34:56 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:09.449 19:34:56 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:09.449 19:34:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:09.449 19:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.449 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:14:09.449 ************************************ 00:14:09.449 START TEST nvmf_host_management 00:14:09.449 ************************************ 00:14:09.449 19:34:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:09.449 * Looking for test storage... 00:14:09.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.449 19:34:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:09.449 19:34:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:09.449 19:34:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:09.708 19:34:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:09.708 19:34:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:09.708 19:34:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:09.708 19:34:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:09.708 19:34:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:09.708 19:34:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:09.708 19:34:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.708 19:34:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:09.708 19:34:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:09.708 19:34:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:09.708 19:34:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:09.708 19:34:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:09.708 19:34:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:09.708 19:34:56 -- scripts/common.sh@344 -- # : 1 00:14:09.708 19:34:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:09.708 19:34:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.708 19:34:56 -- scripts/common.sh@364 -- # decimal 1 00:14:09.708 19:34:56 -- scripts/common.sh@352 -- # local d=1 00:14:09.708 19:34:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.708 19:34:56 -- scripts/common.sh@354 -- # echo 1 00:14:09.708 19:34:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:09.708 19:34:56 -- scripts/common.sh@365 -- # decimal 2 00:14:09.708 19:34:56 -- scripts/common.sh@352 -- # local d=2 00:14:09.708 19:34:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.708 19:34:56 -- scripts/common.sh@354 -- # echo 2 00:14:09.708 19:34:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:09.708 19:34:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:09.708 19:34:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:09.708 19:34:56 -- scripts/common.sh@367 -- # return 0 00:14:09.708 19:34:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.708 19:34:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.708 --rc genhtml_branch_coverage=1 00:14:09.708 --rc genhtml_function_coverage=1 00:14:09.708 --rc genhtml_legend=1 00:14:09.708 --rc geninfo_all_blocks=1 00:14:09.708 --rc geninfo_unexecuted_blocks=1 00:14:09.708 00:14:09.708 ' 00:14:09.708 19:34:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.708 --rc genhtml_branch_coverage=1 00:14:09.708 --rc genhtml_function_coverage=1 00:14:09.708 --rc genhtml_legend=1 00:14:09.708 --rc geninfo_all_blocks=1 00:14:09.708 --rc geninfo_unexecuted_blocks=1 00:14:09.708 00:14:09.708 ' 00:14:09.708 19:34:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.708 --rc genhtml_branch_coverage=1 00:14:09.708 --rc genhtml_function_coverage=1 00:14:09.708 --rc genhtml_legend=1 00:14:09.708 --rc geninfo_all_blocks=1 00:14:09.708 --rc geninfo_unexecuted_blocks=1 00:14:09.709 00:14:09.709 ' 00:14:09.709 19:34:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.709 --rc genhtml_branch_coverage=1 00:14:09.709 --rc genhtml_function_coverage=1 00:14:09.709 --rc genhtml_legend=1 00:14:09.709 --rc geninfo_all_blocks=1 00:14:09.709 --rc geninfo_unexecuted_blocks=1 00:14:09.709 00:14:09.709 ' 00:14:09.709 19:34:56 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.709 19:34:56 -- nvmf/common.sh@7 -- # uname -s 00:14:09.709 19:34:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.709 19:34:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.709 19:34:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.709 19:34:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.709 19:34:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.709 19:34:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.709 19:34:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.709 19:34:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.709 19:34:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.709 19:34:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
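The host NQN assigned just above is produced by nvme gen-hostnqn, which emits a UUID-based NQN; on the next line the UUID portion is reused as the host ID, and the pair is collected into the NVME_HOST array for later connect calls. This harness drives I/O through SPDK's own initiator (bdevperf, further down), but the same identity also works with the kernel initiator via nvme-cli. A sketch, assuming nvme-cli is installed and reusing the 10.0.0.2:4420 listener and cnode0 subsystem that this run brings up later:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the UUID part doubles as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"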
00:14:09.709 19:34:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:09.709 19:34:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.709 19:34:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.709 19:34:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.709 19:34:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.709 19:34:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.709 19:34:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.709 19:34:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.709 19:34:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.709 19:34:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.709 19:34:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.709 19:34:56 -- paths/export.sh@5 -- # export PATH 00:14:09.709 19:34:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.709 19:34:56 -- nvmf/common.sh@46 -- # : 0 00:14:09.709 19:34:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.709 19:34:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.709 19:34:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.709 19:34:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.709 19:34:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.709 19:34:56 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:09.709 19:34:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.709 19:34:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.709 19:34:56 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.709 19:34:56 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.709 19:34:56 -- target/host_management.sh@104 -- # nvmftestinit 00:14:09.709 19:34:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.709 19:34:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.709 19:34:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:09.709 19:34:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.709 19:34:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:09.709 19:34:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.709 19:34:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.709 19:34:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.709 19:34:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:09.709 19:34:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:09.709 19:34:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.709 19:34:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.709 19:34:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:09.709 19:34:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:09.709 19:34:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.709 19:34:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.709 19:34:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.709 19:34:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.709 19:34:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.709 19:34:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.709 19:34:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.709 19:34:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.709 19:34:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:09.709 19:34:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:09.709 Cannot find device "nvmf_tgt_br" 00:14:09.709 19:34:56 -- nvmf/common.sh@154 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.709 Cannot find device "nvmf_tgt_br2" 00:14:09.709 19:34:56 -- nvmf/common.sh@155 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:09.709 19:34:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:09.709 Cannot find device "nvmf_tgt_br" 00:14:09.709 19:34:56 -- nvmf/common.sh@157 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:09.709 Cannot find device "nvmf_tgt_br2" 00:14:09.709 19:34:56 -- nvmf/common.sh@158 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:09.709 19:34:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:09.709 19:34:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:09.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.709 19:34:56 -- nvmf/common.sh@161 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.709 19:34:56 -- nvmf/common.sh@162 -- # true 00:14:09.709 19:34:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.709 19:34:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.709 19:34:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.709 19:34:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.709 19:34:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.968 19:34:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.968 19:34:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.968 19:34:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:09.968 19:34:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:09.968 19:34:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:09.968 19:34:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:09.968 19:34:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:09.968 19:34:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:09.968 19:34:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:09.968 19:34:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.968 19:34:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.968 19:34:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:09.968 19:34:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:09.968 19:34:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.968 19:34:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.968 19:34:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.968 19:34:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.968 19:34:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:09.968 19:34:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:09.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:09.968 00:14:09.968 --- 10.0.0.2 ping statistics --- 00:14:09.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.968 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:09.968 19:34:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:09.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:09.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:09.968 00:14:09.968 --- 10.0.0.3 ping statistics --- 00:14:09.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.968 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:09.968 19:34:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:09.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:09.968 00:14:09.968 --- 10.0.0.1 ping statistics --- 00:14:09.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.968 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:09.968 19:34:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.968 19:34:56 -- nvmf/common.sh@421 -- # return 0 00:14:09.968 19:34:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:09.968 19:34:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.968 19:34:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:09.968 19:34:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:09.968 19:34:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.968 19:34:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:09.968 19:34:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:09.968 19:34:56 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:09.968 19:34:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.968 19:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.968 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 ************************************ 00:14:09.968 START TEST nvmf_host_management 00:14:09.968 ************************************ 00:14:09.968 19:34:56 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:09.968 19:34:56 -- target/host_management.sh@69 -- # starttarget 00:14:09.968 19:34:56 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:09.968 19:34:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:09.968 19:34:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:09.968 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 19:34:56 -- nvmf/common.sh@469 -- # nvmfpid=82598 00:14:09.968 19:34:56 -- nvmf/common.sh@470 -- # waitforlisten 82598 00:14:09.968 19:34:56 -- common/autotest_common.sh@829 -- # '[' -z 82598 ']' 00:14:09.968 19:34:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:09.968 19:34:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.968 19:34:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.968 19:34:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.969 19:34:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.969 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:14:09.969 [2024-12-15 19:34:56.842551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:09.969 [2024-12-15 19:34:56.843159] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.227 [2024-12-15 19:34:56.979670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.227 [2024-12-15 19:34:57.052747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:10.227 [2024-12-15 19:34:57.053272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:10.227 [2024-12-15 19:34:57.053323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.227 [2024-12-15 19:34:57.053575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.227 [2024-12-15 19:34:57.053787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.227 [2024-12-15 19:34:57.054524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.227 [2024-12-15 19:34:57.054660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.227 [2024-12-15 19:34:57.054876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.161 19:34:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.161 19:34:57 -- common/autotest_common.sh@862 -- # return 0 00:14:11.161 19:34:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.161 19:34:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.161 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 19:34:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.161 19:34:57 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.161 19:34:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.161 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 [2024-12-15 19:34:57.861098] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.161 19:34:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.161 19:34:57 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:11.161 19:34:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.161 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 19:34:57 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:11.161 19:34:57 -- target/host_management.sh@23 -- # cat 00:14:11.161 19:34:57 -- target/host_management.sh@30 -- # rpc_cmd 00:14:11.161 19:34:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.161 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 Malloc0 00:14:11.161 [2024-12-15 19:34:57.953709] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.161 19:34:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.161 19:34:57 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:11.161 19:34:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.161 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 19:34:58 -- target/host_management.sh@73 -- # perfpid=82670 00:14:11.161 19:34:58 -- target/host_management.sh@74 -- # waitforlisten 82670 /var/tmp/bdevperf.sock 00:14:11.161 19:34:58 -- common/autotest_common.sh@829 -- # '[' -z 82670 ']' 00:14:11.161 19:34:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.161 19:34:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.161 19:34:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:11.161 19:34:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.161 19:34:58 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:11.161 19:34:58 -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 19:34:58 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:11.161 19:34:58 -- nvmf/common.sh@520 -- # config=() 00:14:11.161 19:34:58 -- nvmf/common.sh@520 -- # local subsystem config 00:14:11.161 19:34:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:11.161 19:34:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:11.161 { 00:14:11.161 "params": { 00:14:11.161 "name": "Nvme$subsystem", 00:14:11.161 "trtype": "$TEST_TRANSPORT", 00:14:11.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.161 "adrfam": "ipv4", 00:14:11.161 "trsvcid": "$NVMF_PORT", 00:14:11.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.161 "hdgst": ${hdgst:-false}, 00:14:11.161 "ddgst": ${ddgst:-false} 00:14:11.161 }, 00:14:11.161 "method": "bdev_nvme_attach_controller" 00:14:11.161 } 00:14:11.161 EOF 00:14:11.161 )") 00:14:11.161 19:34:58 -- nvmf/common.sh@542 -- # cat 00:14:11.161 19:34:58 -- nvmf/common.sh@544 -- # jq . 00:14:11.161 19:34:58 -- nvmf/common.sh@545 -- # IFS=, 00:14:11.162 19:34:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:11.162 "params": { 00:14:11.162 "name": "Nvme0", 00:14:11.162 "trtype": "tcp", 00:14:11.162 "traddr": "10.0.0.2", 00:14:11.162 "adrfam": "ipv4", 00:14:11.162 "trsvcid": "4420", 00:14:11.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:11.162 "hdgst": false, 00:14:11.162 "ddgst": false 00:14:11.162 }, 00:14:11.162 "method": "bdev_nvme_attach_controller" 00:14:11.162 }' 00:14:11.162 [2024-12-15 19:34:58.052939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:11.162 [2024-12-15 19:34:58.053022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82670 ] 00:14:11.420 [2024-12-15 19:34:58.186318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.420 [2024-12-15 19:34:58.260692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.678 Running I/O for 10 seconds... 
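bdevperf gets its block-device configuration entirely from the JSON printed above: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry and the shell hands it to --json over a file descriptor (/dev/fd/63). A minimal way to reproduce the same run by hand is to wrap that entry in a bdev-subsystem config file; the wrapper layout below is the standard SPDK JSON-config shape and the parameter values are copied from the trace, while the file path is an assumption of this sketch:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the trace: queue depth 64, 64 KiB verify I/O for 10 seconds.
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10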
00:14:12.245 19:34:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.245 19:34:59 -- common/autotest_common.sh@862 -- # return 0 00:14:12.245 19:34:59 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:12.245 19:34:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.245 19:34:59 -- common/autotest_common.sh@10 -- # set +x 00:14:12.245 19:34:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.245 19:34:59 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.245 19:34:59 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:12.245 19:34:59 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:12.245 19:34:59 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:12.245 19:34:59 -- target/host_management.sh@52 -- # local ret=1 00:14:12.245 19:34:59 -- target/host_management.sh@53 -- # local i 00:14:12.245 19:34:59 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:12.245 19:34:59 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:12.245 19:34:59 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:12.245 19:34:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.245 19:34:59 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:12.245 19:34:59 -- common/autotest_common.sh@10 -- # set +x 00:14:12.245 19:34:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.505 19:34:59 -- target/host_management.sh@55 -- # read_io_count=2267 00:14:12.505 19:34:59 -- target/host_management.sh@58 -- # '[' 2267 -ge 100 ']' 00:14:12.505 19:34:59 -- target/host_management.sh@59 -- # ret=0 00:14:12.505 19:34:59 -- target/host_management.sh@60 -- # break 00:14:12.505 19:34:59 -- target/host_management.sh@64 -- # return 0 00:14:12.505 19:34:59 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.505 19:34:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.505 19:34:59 -- common/autotest_common.sh@10 -- # set +x 00:14:12.505 [2024-12-15 19:34:59.171778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the 
state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.505 [2024-12-15 19:34:59.171958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.171965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.171973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.171981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.171988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.171995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711530 is same with the state(5) to be set 00:14:12.506 [2024-12-15 19:34:59.172294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.506 [2024-12-15 19:34:59.172689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 
[2024-12-15 19:34:59.172943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.506 [2024-12-15 19:34:59.172954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.506 [2024-12-15 19:34:59.172979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.172989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.172999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 
19:34:59.173175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.507 [2024-12-15 19:34:59.173732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.507 [2024-12-15 19:34:59.173742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e37c0 is same with the state(5) to be set 00:14:12.507 [2024-12-15 19:34:59.173823] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e37c0 was disconnected and freed. reset controller. 
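The wall of ABORTED - SQ DELETION completions above is the point of the test rather than a transport failure: the earlier rpc_cmd nvmf_subsystem_remove_host revoked host0's access to cnode0, the target tore down that host's queue pairs, and every read/write bdevperf still had in flight on Nvme0n1 came back aborted, after which the initiator-side bdev_nvme layer disconnects the qpair and resets the controller (the reset completes a moment later, once the host has been added back). The same allow/deny toggle can be driven against a running target with SPDK's rpc.py; a sketch, assuming the default /var/tmp/spdk.sock RPC socket:

# Revoke the host's access: its queue pairs are deleted and in-flight I/O aborts.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# The initiator now sees ABORTED - SQ DELETION completions and begins a controller reset.
sleep 1
# Re-allow the host so the reset and reconnect can succeed.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0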
00:14:12.507 [2024-12-15 19:34:59.175092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:12.507 task offset: 47104 on job bdev=Nvme0n1 fails 00:14:12.507 00:14:12.507 Latency(us) 00:14:12.507 [2024-12-15T19:34:59.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.507 [2024-12-15T19:34:59.403Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:12.507 [2024-12-15T19:34:59.403Z] Job: Nvme0n1 ended in about 0.71 seconds with error 00:14:12.507 Verification LBA range: start 0x0 length 0x400 00:14:12.507 Nvme0n1 : 0.71 3434.86 214.68 90.47 0.00 17872.14 2770.39 24903.68 00:14:12.507 [2024-12-15T19:34:59.403Z] =================================================================================================================== 00:14:12.507 [2024-12-15T19:34:59.403Z] Total : 3434.86 214.68 90.47 0.00 17872.14 2770.39 24903.68 00:14:12.507 [2024-12-15 19:34:59.177061] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.507 [2024-12-15 19:34:59.177087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4a2e0 (9): Bad file descriptor 00:14:12.507 19:34:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.507 19:34:59 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.508 19:34:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.508 19:34:59 -- common/autotest_common.sh@10 -- # set +x 00:14:12.508 [2024-12-15 19:34:59.186494] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:12.508 19:34:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.508 19:34:59 -- target/host_management.sh@87 -- # sleep 1 00:14:13.442 19:35:00 -- target/host_management.sh@91 -- # kill -9 82670 00:14:13.442 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82670) - No such process 00:14:13.442 19:35:00 -- target/host_management.sh@91 -- # true 00:14:13.442 19:35:00 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:13.442 19:35:00 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:13.442 19:35:00 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:13.442 19:35:00 -- nvmf/common.sh@520 -- # config=() 00:14:13.442 19:35:00 -- nvmf/common.sh@520 -- # local subsystem config 00:14:13.442 19:35:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:13.442 19:35:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:13.442 { 00:14:13.442 "params": { 00:14:13.442 "name": "Nvme$subsystem", 00:14:13.442 "trtype": "$TEST_TRANSPORT", 00:14:13.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.442 "adrfam": "ipv4", 00:14:13.442 "trsvcid": "$NVMF_PORT", 00:14:13.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.442 "hdgst": ${hdgst:-false}, 00:14:13.442 "ddgst": ${ddgst:-false} 00:14:13.442 }, 00:14:13.442 "method": "bdev_nvme_attach_controller" 00:14:13.442 } 00:14:13.442 EOF 00:14:13.442 )") 00:14:13.442 19:35:00 -- nvmf/common.sh@542 -- # cat 00:14:13.442 19:35:00 -- nvmf/common.sh@544 -- # jq . 
00:14:13.442 19:35:00 -- nvmf/common.sh@545 -- # IFS=, 00:14:13.442 19:35:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:13.442 "params": { 00:14:13.442 "name": "Nvme0", 00:14:13.442 "trtype": "tcp", 00:14:13.442 "traddr": "10.0.0.2", 00:14:13.442 "adrfam": "ipv4", 00:14:13.442 "trsvcid": "4420", 00:14:13.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.442 "hdgst": false, 00:14:13.442 "ddgst": false 00:14:13.442 }, 00:14:13.442 "method": "bdev_nvme_attach_controller" 00:14:13.442 }' 00:14:13.442 [2024-12-15 19:35:00.252451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:13.442 [2024-12-15 19:35:00.252573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82720 ] 00:14:13.700 [2024-12-15 19:35:00.389900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.700 [2024-12-15 19:35:00.472527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.958 Running I/O for 1 seconds... 00:14:14.893 00:14:14.893 Latency(us) 00:14:14.893 [2024-12-15T19:35:01.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.893 [2024-12-15T19:35:01.789Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.893 Verification LBA range: start 0x0 length 0x400 00:14:14.893 Nvme0n1 : 1.01 3668.83 229.30 0.00 0.00 17157.06 852.71 22997.18 00:14:14.893 [2024-12-15T19:35:01.789Z] =================================================================================================================== 00:14:14.893 [2024-12-15T19:35:01.789Z] Total : 3668.83 229.30 0.00 0.00 17157.06 852.71 22997.18 00:14:15.151 19:35:01 -- target/host_management.sh@101 -- # stoptarget 00:14:15.151 19:35:01 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:15.151 19:35:01 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:15.151 19:35:01 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:15.151 19:35:01 -- target/host_management.sh@40 -- # nvmftestfini 00:14:15.151 19:35:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.151 19:35:01 -- nvmf/common.sh@116 -- # sync 00:14:15.151 19:35:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.151 19:35:02 -- nvmf/common.sh@119 -- # set +e 00:14:15.151 19:35:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.151 19:35:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.151 rmmod nvme_tcp 00:14:15.151 rmmod nvme_fabrics 00:14:15.409 rmmod nvme_keyring 00:14:15.409 19:35:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.409 19:35:02 -- nvmf/common.sh@123 -- # set -e 00:14:15.409 19:35:02 -- nvmf/common.sh@124 -- # return 0 00:14:15.409 19:35:02 -- nvmf/common.sh@477 -- # '[' -n 82598 ']' 00:14:15.409 19:35:02 -- nvmf/common.sh@478 -- # killprocess 82598 00:14:15.409 19:35:02 -- common/autotest_common.sh@936 -- # '[' -z 82598 ']' 00:14:15.409 19:35:02 -- common/autotest_common.sh@940 -- # kill -0 82598 00:14:15.409 19:35:02 -- common/autotest_common.sh@941 -- # uname 00:14:15.409 19:35:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.409 19:35:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82598 00:14:15.409 
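For anyone replaying the bdevperf step traced above by hand: the rendered bdev_nvme_attach_controller entry that gen_nvmf_target_json printed is normally embedded in a full SPDK startup config before being fed to bdevperf via --json. A minimal sketch, assuming the standard subsystems/bdev wrapper layout (only the inner entry is shown in this log; the wrapper and the file name config.json are illustrative assumptions):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # usage sketch, matching the flags used in the run above (config.json is hypothetical):
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json config.json -q 64 -o 65536 -w verify -t 1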
19:35:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:15.409 19:35:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:15.409 killing process with pid 82598 00:14:15.410 19:35:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82598' 00:14:15.410 19:35:02 -- common/autotest_common.sh@955 -- # kill 82598 00:14:15.410 19:35:02 -- common/autotest_common.sh@960 -- # wait 82598 00:14:15.668 [2024-12-15 19:35:02.388604] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:15.668 19:35:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.668 19:35:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:15.668 19:35:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:15.668 19:35:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.668 19:35:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:15.668 19:35:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.668 19:35:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.668 19:35:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.668 19:35:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:15.668 00:14:15.668 real 0m5.670s 00:14:15.668 user 0m23.614s 00:14:15.668 sys 0m1.570s 00:14:15.668 19:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.668 19:35:02 -- common/autotest_common.sh@10 -- # set +x 00:14:15.668 ************************************ 00:14:15.668 END TEST nvmf_host_management 00:14:15.668 ************************************ 00:14:15.668 19:35:02 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:15.668 00:14:15.668 real 0m6.270s 00:14:15.668 user 0m23.808s 00:14:15.668 sys 0m1.832s 00:14:15.668 19:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.668 19:35:02 -- common/autotest_common.sh@10 -- # set +x 00:14:15.668 ************************************ 00:14:15.668 END TEST nvmf_host_management 00:14:15.668 ************************************ 00:14:15.668 19:35:02 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.668 19:35:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.668 19:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.668 19:35:02 -- common/autotest_common.sh@10 -- # set +x 00:14:15.668 ************************************ 00:14:15.668 START TEST nvmf_lvol 00:14:15.668 ************************************ 00:14:15.668 19:35:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.928 * Looking for test storage... 
00:14:15.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.928 19:35:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:15.928 19:35:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:15.928 19:35:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:15.928 19:35:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:15.928 19:35:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:15.928 19:35:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:15.928 19:35:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:15.928 19:35:02 -- scripts/common.sh@335 -- # IFS=.-: 00:14:15.928 19:35:02 -- scripts/common.sh@335 -- # read -ra ver1 00:14:15.928 19:35:02 -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.928 19:35:02 -- scripts/common.sh@336 -- # read -ra ver2 00:14:15.928 19:35:02 -- scripts/common.sh@337 -- # local 'op=<' 00:14:15.928 19:35:02 -- scripts/common.sh@339 -- # ver1_l=2 00:14:15.928 19:35:02 -- scripts/common.sh@340 -- # ver2_l=1 00:14:15.928 19:35:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:15.928 19:35:02 -- scripts/common.sh@343 -- # case "$op" in 00:14:15.928 19:35:02 -- scripts/common.sh@344 -- # : 1 00:14:15.928 19:35:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:15.928 19:35:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.928 19:35:02 -- scripts/common.sh@364 -- # decimal 1 00:14:15.928 19:35:02 -- scripts/common.sh@352 -- # local d=1 00:14:15.928 19:35:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.928 19:35:02 -- scripts/common.sh@354 -- # echo 1 00:14:15.928 19:35:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:15.928 19:35:02 -- scripts/common.sh@365 -- # decimal 2 00:14:15.928 19:35:02 -- scripts/common.sh@352 -- # local d=2 00:14:15.928 19:35:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.928 19:35:02 -- scripts/common.sh@354 -- # echo 2 00:14:15.928 19:35:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:15.928 19:35:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:15.928 19:35:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:15.928 19:35:02 -- scripts/common.sh@367 -- # return 0 00:14:15.928 19:35:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.928 19:35:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.928 --rc genhtml_branch_coverage=1 00:14:15.928 --rc genhtml_function_coverage=1 00:14:15.928 --rc genhtml_legend=1 00:14:15.928 --rc geninfo_all_blocks=1 00:14:15.928 --rc geninfo_unexecuted_blocks=1 00:14:15.928 00:14:15.928 ' 00:14:15.928 19:35:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.928 --rc genhtml_branch_coverage=1 00:14:15.928 --rc genhtml_function_coverage=1 00:14:15.928 --rc genhtml_legend=1 00:14:15.928 --rc geninfo_all_blocks=1 00:14:15.928 --rc geninfo_unexecuted_blocks=1 00:14:15.928 00:14:15.928 ' 00:14:15.928 19:35:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.928 --rc genhtml_branch_coverage=1 00:14:15.928 --rc genhtml_function_coverage=1 00:14:15.928 --rc genhtml_legend=1 00:14:15.928 --rc geninfo_all_blocks=1 00:14:15.928 --rc geninfo_unexecuted_blocks=1 00:14:15.928 00:14:15.928 ' 00:14:15.928 
19:35:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.928 --rc genhtml_branch_coverage=1 00:14:15.928 --rc genhtml_function_coverage=1 00:14:15.928 --rc genhtml_legend=1 00:14:15.928 --rc geninfo_all_blocks=1 00:14:15.928 --rc geninfo_unexecuted_blocks=1 00:14:15.928 00:14:15.928 ' 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.928 19:35:02 -- nvmf/common.sh@7 -- # uname -s 00:14:15.928 19:35:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.928 19:35:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.928 19:35:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.928 19:35:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.928 19:35:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.928 19:35:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.928 19:35:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.928 19:35:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.928 19:35:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.928 19:35:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:15.928 19:35:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:15.928 19:35:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.928 19:35:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.928 19:35:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.928 19:35:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.928 19:35:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.928 19:35:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.928 19:35:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.928 19:35:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.928 19:35:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.928 19:35:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.928 19:35:02 -- paths/export.sh@5 -- # export PATH 00:14:15.928 19:35:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.928 19:35:02 -- nvmf/common.sh@46 -- # : 0 00:14:15.928 19:35:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.928 19:35:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.928 19:35:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.928 19:35:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.928 19:35:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.928 19:35:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.928 19:35:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.928 19:35:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.928 19:35:02 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:15.928 19:35:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.928 19:35:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.928 19:35:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.928 19:35:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.928 19:35:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.928 19:35:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.928 19:35:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.928 19:35:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.928 19:35:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:15.928 19:35:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:15.929 19:35:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.929 19:35:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.929 19:35:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.929 19:35:02 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:15.929 19:35:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.929 19:35:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.929 19:35:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.929 19:35:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.929 19:35:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.929 19:35:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.929 19:35:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.929 19:35:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.929 19:35:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:15.929 19:35:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:15.929 Cannot find device "nvmf_tgt_br" 00:14:15.929 19:35:02 -- nvmf/common.sh@154 -- # true 00:14:15.929 19:35:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.929 Cannot find device "nvmf_tgt_br2" 00:14:15.929 19:35:02 -- nvmf/common.sh@155 -- # true 00:14:15.929 19:35:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:15.929 19:35:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:15.929 Cannot find device "nvmf_tgt_br" 00:14:15.929 19:35:02 -- nvmf/common.sh@157 -- # true 00:14:15.929 19:35:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:15.929 Cannot find device "nvmf_tgt_br2" 00:14:15.929 19:35:02 -- nvmf/common.sh@158 -- # true 00:14:15.929 19:35:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:16.188 19:35:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:16.188 19:35:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.188 19:35:02 -- nvmf/common.sh@161 -- # true 00:14:16.188 19:35:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.188 19:35:02 -- nvmf/common.sh@162 -- # true 00:14:16.188 19:35:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.188 19:35:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.188 19:35:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.188 19:35:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.188 19:35:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.188 19:35:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.188 19:35:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.188 19:35:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:16.188 19:35:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:16.188 19:35:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:16.188 19:35:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:16.188 19:35:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:16.188 19:35:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:16.188 19:35:02 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.188 19:35:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.188 19:35:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.188 19:35:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:16.188 19:35:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:16.188 19:35:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.188 19:35:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.188 19:35:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.188 19:35:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.188 19:35:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.188 19:35:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:16.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:16.188 00:14:16.188 --- 10.0.0.2 ping statistics --- 00:14:16.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.188 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:16.188 19:35:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:16.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:16.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:16.188 00:14:16.188 --- 10.0.0.3 ping statistics --- 00:14:16.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.188 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:16.188 19:35:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:16.188 00:14:16.188 --- 10.0.0.1 ping statistics --- 00:14:16.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.188 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:16.188 19:35:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.188 19:35:03 -- nvmf/common.sh@421 -- # return 0 00:14:16.188 19:35:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:16.188 19:35:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.188 19:35:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:16.188 19:35:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:16.188 19:35:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.188 19:35:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:16.188 19:35:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:16.188 19:35:03 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:16.188 19:35:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:16.188 19:35:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.188 19:35:03 -- common/autotest_common.sh@10 -- # set +x 00:14:16.447 19:35:03 -- nvmf/common.sh@469 -- # nvmfpid=82958 00:14:16.447 19:35:03 -- nvmf/common.sh@470 -- # waitforlisten 82958 00:14:16.447 19:35:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:16.447 19:35:03 -- common/autotest_common.sh@829 -- # '[' -z 82958 ']' 00:14:16.447 19:35:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.447 19:35:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.447 19:35:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.447 19:35:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.447 19:35:03 -- common/autotest_common.sh@10 -- # set +x 00:14:16.447 [2024-12-15 19:35:03.129363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:16.447 [2024-12-15 19:35:03.129439] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.447 [2024-12-15 19:35:03.264860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.706 [2024-12-15 19:35:03.365353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.706 [2024-12-15 19:35:03.365549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.706 [2024-12-15 19:35:03.365566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.706 [2024-12-15 19:35:03.365577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
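The nvmf_veth_init trace above (repeated later for the lvs_grow run) builds a small bridged test network: the initiator stays in the root namespace, the target interfaces live in the nvmf_tgt_ns_spdk namespace, and the peer ends are joined by the nvmf_br bridge. A condensed, commented sketch of the same commands, with names and addresses taken from the log (the cleanup of any previous state is omitted):

    # namespace and veth pairs: one initiator-side pair, two target-side pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 for the initiator, 10.0.0.2 and 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring the interfaces up
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the *_br peer ends so initiator and target can reach each other
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # accept NVMe/TCP traffic (port 4420) and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) simply confirm this wiring before the nvmf target is started on it.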
00:14:16.706 [2024-12-15 19:35:03.365775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.706 [2024-12-15 19:35:03.365899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.706 [2024-12-15 19:35:03.365909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.273 19:35:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.273 19:35:04 -- common/autotest_common.sh@862 -- # return 0 00:14:17.273 19:35:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:17.273 19:35:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.273 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:14:17.273 19:35:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.273 19:35:04 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.531 [2024-12-15 19:35:04.390449] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.531 19:35:04 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.855 19:35:04 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:17.855 19:35:04 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.124 19:35:04 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:18.124 19:35:04 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:18.711 19:35:05 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:18.968 19:35:05 -- target/nvmf_lvol.sh@29 -- # lvs=5ec5612b-2eea-417d-9324-e440b86f4f8e 00:14:18.968 19:35:05 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ec5612b-2eea-417d-9324-e440b86f4f8e lvol 20 00:14:18.968 19:35:05 -- target/nvmf_lvol.sh@32 -- # lvol=123489b9-6e86-4ad3-a632-bc83d8d2e238 00:14:18.968 19:35:05 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:19.227 19:35:06 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 123489b9-6e86-4ad3-a632-bc83d8d2e238 00:14:19.485 19:35:06 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:19.743 [2024-12-15 19:35:06.453219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.743 19:35:06 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.002 19:35:06 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:20.002 19:35:06 -- target/nvmf_lvol.sh@42 -- # perf_pid=83106 00:14:20.002 19:35:06 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:20.936 19:35:07 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 123489b9-6e86-4ad3-a632-bc83d8d2e238 MY_SNAPSHOT 00:14:21.194 19:35:07 -- target/nvmf_lvol.sh@47 -- # snapshot=751fa874-f481-4ecf-b3e2-6872f63d2c2e 00:14:21.194 19:35:07 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 123489b9-6e86-4ad3-a632-bc83d8d2e238 30 00:14:21.453 19:35:08 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 751fa874-f481-4ecf-b3e2-6872f63d2c2e MY_CLONE 00:14:21.711 19:35:08 -- target/nvmf_lvol.sh@49 -- # clone=eb0182bd-432d-46e4-9c3c-1684b5d37fd1 00:14:21.711 19:35:08 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate eb0182bd-432d-46e4-9c3c-1684b5d37fd1 00:14:22.277 19:35:09 -- target/nvmf_lvol.sh@53 -- # wait 83106 00:14:30.386 Initializing NVMe Controllers 00:14:30.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:30.386 Controller IO queue size 128, less than required. 00:14:30.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:30.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:30.387 Initialization complete. Launching workers. 00:14:30.387 ======================================================== 00:14:30.387 Latency(us) 00:14:30.387 Device Information : IOPS MiB/s Average min max 00:14:30.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12129.80 47.38 10558.47 1532.29 53942.94 00:14:30.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12236.90 47.80 10459.38 2799.31 60451.11 00:14:30.387 ======================================================== 00:14:30.387 Total : 24366.70 95.18 10508.71 1532.29 60451.11 00:14:30.387 00:14:30.387 19:35:17 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:30.387 19:35:17 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 123489b9-6e86-4ad3-a632-bc83d8d2e238 00:14:30.645 19:35:17 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ec5612b-2eea-417d-9324-e440b86f4f8e 00:14:30.903 19:35:17 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:30.903 19:35:17 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:30.903 19:35:17 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:30.903 19:35:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.903 19:35:17 -- nvmf/common.sh@116 -- # sync 00:14:30.903 19:35:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.903 19:35:17 -- nvmf/common.sh@119 -- # set +e 00:14:30.903 19:35:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.903 19:35:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.903 rmmod nvme_tcp 00:14:30.903 rmmod nvme_fabrics 00:14:30.903 rmmod nvme_keyring 00:14:30.903 19:35:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.903 19:35:17 -- nvmf/common.sh@123 -- # set -e 00:14:30.903 19:35:17 -- nvmf/common.sh@124 -- # return 0 00:14:30.903 19:35:17 -- nvmf/common.sh@477 -- # '[' -n 82958 ']' 00:14:30.903 19:35:17 -- nvmf/common.sh@478 -- # killprocess 82958 00:14:30.903 19:35:17 -- common/autotest_common.sh@936 -- # '[' -z 82958 ']' 00:14:30.903 19:35:17 -- common/autotest_common.sh@940 -- # kill -0 82958 00:14:30.903 19:35:17 -- common/autotest_common.sh@941 -- # uname 00:14:30.903 19:35:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.903 19:35:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 82958 00:14:30.903 19:35:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.903 19:35:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.903 19:35:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82958' 00:14:30.903 killing process with pid 82958 00:14:30.903 19:35:17 -- common/autotest_common.sh@955 -- # kill 82958 00:14:30.903 19:35:17 -- common/autotest_common.sh@960 -- # wait 82958 00:14:31.470 19:35:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:31.470 19:35:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:31.470 19:35:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:31.470 19:35:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.470 19:35:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:31.470 19:35:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.470 19:35:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.470 19:35:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.470 19:35:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:31.470 00:14:31.470 real 0m15.612s 00:14:31.470 user 1m4.727s 00:14:31.470 sys 0m4.103s 00:14:31.470 19:35:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.470 19:35:18 -- common/autotest_common.sh@10 -- # set +x 00:14:31.470 ************************************ 00:14:31.470 END TEST nvmf_lvol 00:14:31.470 ************************************ 00:14:31.470 19:35:18 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:31.470 19:35:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:31.470 19:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.470 19:35:18 -- common/autotest_common.sh@10 -- # set +x 00:14:31.470 ************************************ 00:14:31.470 START TEST nvmf_lvs_grow 00:14:31.470 ************************************ 00:14:31.470 19:35:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:31.470 * Looking for test storage... 
00:14:31.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.470 19:35:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.470 19:35:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.470 19:35:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.729 19:35:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.729 19:35:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.729 19:35:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.729 19:35:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.729 19:35:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.729 19:35:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.729 19:35:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.729 19:35:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.729 19:35:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.729 19:35:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.729 19:35:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.729 19:35:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.729 19:35:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.729 19:35:18 -- scripts/common.sh@344 -- # : 1 00:14:31.729 19:35:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.729 19:35:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.729 19:35:18 -- scripts/common.sh@364 -- # decimal 1 00:14:31.729 19:35:18 -- scripts/common.sh@352 -- # local d=1 00:14:31.729 19:35:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.729 19:35:18 -- scripts/common.sh@354 -- # echo 1 00:14:31.729 19:35:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.729 19:35:18 -- scripts/common.sh@365 -- # decimal 2 00:14:31.729 19:35:18 -- scripts/common.sh@352 -- # local d=2 00:14:31.729 19:35:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.729 19:35:18 -- scripts/common.sh@354 -- # echo 2 00:14:31.729 19:35:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.729 19:35:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.729 19:35:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.729 19:35:18 -- scripts/common.sh@367 -- # return 0 00:14:31.729 19:35:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.729 19:35:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.729 --rc genhtml_branch_coverage=1 00:14:31.729 --rc genhtml_function_coverage=1 00:14:31.729 --rc genhtml_legend=1 00:14:31.729 --rc geninfo_all_blocks=1 00:14:31.729 --rc geninfo_unexecuted_blocks=1 00:14:31.729 00:14:31.729 ' 00:14:31.729 19:35:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.729 --rc genhtml_branch_coverage=1 00:14:31.729 --rc genhtml_function_coverage=1 00:14:31.729 --rc genhtml_legend=1 00:14:31.729 --rc geninfo_all_blocks=1 00:14:31.729 --rc geninfo_unexecuted_blocks=1 00:14:31.729 00:14:31.729 ' 00:14:31.729 19:35:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.729 --rc genhtml_branch_coverage=1 00:14:31.729 --rc genhtml_function_coverage=1 00:14:31.729 --rc genhtml_legend=1 00:14:31.729 --rc geninfo_all_blocks=1 00:14:31.729 --rc geninfo_unexecuted_blocks=1 00:14:31.729 00:14:31.729 ' 00:14:31.729 
19:35:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.729 --rc genhtml_branch_coverage=1 00:14:31.729 --rc genhtml_function_coverage=1 00:14:31.729 --rc genhtml_legend=1 00:14:31.729 --rc geninfo_all_blocks=1 00:14:31.729 --rc geninfo_unexecuted_blocks=1 00:14:31.729 00:14:31.729 ' 00:14:31.729 19:35:18 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.729 19:35:18 -- nvmf/common.sh@7 -- # uname -s 00:14:31.729 19:35:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.729 19:35:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.729 19:35:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.729 19:35:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.729 19:35:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.729 19:35:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.729 19:35:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.729 19:35:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.729 19:35:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.729 19:35:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.730 19:35:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:31.730 19:35:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:14:31.730 19:35:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.730 19:35:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.730 19:35:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.730 19:35:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.730 19:35:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.730 19:35:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.730 19:35:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.730 19:35:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.730 19:35:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.730 19:35:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.730 19:35:18 -- paths/export.sh@5 -- # export PATH 00:14:31.730 19:35:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.730 19:35:18 -- nvmf/common.sh@46 -- # : 0 00:14:31.730 19:35:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.730 19:35:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.730 19:35:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.730 19:35:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.730 19:35:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.730 19:35:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:31.730 19:35:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.730 19:35:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.730 19:35:18 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.730 19:35:18 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.730 19:35:18 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:31.730 19:35:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.730 19:35:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.730 19:35:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.730 19:35:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.730 19:35:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.730 19:35:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.730 19:35:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.730 19:35:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.730 19:35:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:31.730 19:35:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:31.730 19:35:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:31.730 19:35:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:31.730 19:35:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:31.730 19:35:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:31.730 19:35:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.730 19:35:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.730 19:35:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.730 19:35:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:31.730 19:35:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.730 19:35:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.730 19:35:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.730 19:35:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.730 19:35:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.730 19:35:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.730 19:35:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.730 19:35:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.730 19:35:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:31.730 19:35:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:31.730 Cannot find device "nvmf_tgt_br" 00:14:31.730 19:35:18 -- nvmf/common.sh@154 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.730 Cannot find device "nvmf_tgt_br2" 00:14:31.730 19:35:18 -- nvmf/common.sh@155 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:31.730 19:35:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:31.730 Cannot find device "nvmf_tgt_br" 00:14:31.730 19:35:18 -- nvmf/common.sh@157 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:31.730 Cannot find device "nvmf_tgt_br2" 00:14:31.730 19:35:18 -- nvmf/common.sh@158 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:31.730 19:35:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:31.730 19:35:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.730 19:35:18 -- nvmf/common.sh@161 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.730 19:35:18 -- nvmf/common.sh@162 -- # true 00:14:31.730 19:35:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.730 19:35:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.730 19:35:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.730 19:35:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.730 19:35:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.730 19:35:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.988 19:35:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.988 19:35:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.988 19:35:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.988 19:35:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:31.988 19:35:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:31.988 19:35:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:31.988 19:35:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:31.988 19:35:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.988 19:35:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:31.988 19:35:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.988 19:35:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:31.988 19:35:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:31.988 19:35:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.988 19:35:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.988 19:35:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.988 19:35:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.988 19:35:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.988 19:35:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:31.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:31.989 00:14:31.989 --- 10.0.0.2 ping statistics --- 00:14:31.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.989 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:31.989 19:35:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:31.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:31.989 00:14:31.989 --- 10.0.0.3 ping statistics --- 00:14:31.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.989 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:31.989 19:35:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:31.989 00:14:31.989 --- 10.0.0.1 ping statistics --- 00:14:31.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.989 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:31.989 19:35:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.989 19:35:18 -- nvmf/common.sh@421 -- # return 0 00:14:31.989 19:35:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.989 19:35:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.989 19:35:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.989 19:35:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.989 19:35:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.989 19:35:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.989 19:35:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.989 19:35:18 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:31.989 19:35:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.989 19:35:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.989 19:35:18 -- common/autotest_common.sh@10 -- # set +x 00:14:31.989 19:35:18 -- nvmf/common.sh@469 -- # nvmfpid=83474 00:14:31.989 19:35:18 -- nvmf/common.sh@470 -- # waitforlisten 83474 00:14:31.989 19:35:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.989 19:35:18 -- common/autotest_common.sh@829 -- # '[' -z 83474 ']' 00:14:31.989 19:35:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.989 19:35:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.989 19:35:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:14:31.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.989 19:35:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.989 19:35:18 -- common/autotest_common.sh@10 -- # set +x 00:14:31.989 [2024-12-15 19:35:18.822209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:31.989 [2024-12-15 19:35:18.822295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.247 [2024-12-15 19:35:18.950045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.247 [2024-12-15 19:35:19.028662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.247 [2024-12-15 19:35:19.029147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.247 [2024-12-15 19:35:19.029288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.247 [2024-12-15 19:35:19.029305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.247 [2024-12-15 19:35:19.029335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.183 19:35:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.183 19:35:19 -- common/autotest_common.sh@862 -- # return 0 00:14:33.183 19:35:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:33.183 19:35:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.183 19:35:19 -- common/autotest_common.sh@10 -- # set +x 00:14:33.183 19:35:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.183 19:35:19 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:33.183 [2024-12-15 19:35:20.021047] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:33.183 19:35:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.183 19:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.183 19:35:20 -- common/autotest_common.sh@10 -- # set +x 00:14:33.183 ************************************ 00:14:33.183 START TEST lvs_grow_clean 00:14:33.183 ************************************ 00:14:33.183 19:35:20 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:33.183 19:35:20 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.749 19:35:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:33.749 19:35:20 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b817aad-db07-4ba4-b930-c1df65128c71 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:34.008 19:35:20 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6b817aad-db07-4ba4-b930-c1df65128c71 lvol 150 00:14:34.266 19:35:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd131304-2d71-4ca6-9e6b-a75ba9cb609c 00:14:34.266 19:35:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:34.266 19:35:21 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:34.524 [2024-12-15 19:35:21.414787] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:34.524 [2024-12-15 19:35:21.414885] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:34.782 true 00:14:34.782 19:35:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:34.782 19:35:21 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:35.041 19:35:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:35.041 19:35:21 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:35.041 19:35:21 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd131304-2d71-4ca6-9e6b-a75ba9cb609c 00:14:35.300 19:35:22 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:35.559 [2024-12-15 19:35:22.339366] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.559 19:35:22 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.817 19:35:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83639 00:14:35.817 19:35:22 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:35.817 19:35:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.817 19:35:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83639 /var/tmp/bdevperf.sock 00:14:35.817 19:35:22 -- common/autotest_common.sh@829 -- # '[' -z 83639 ']' 00:14:35.817 19:35:22 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.817 19:35:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.817 19:35:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.817 19:35:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.817 19:35:22 -- common/autotest_common.sh@10 -- # set +x 00:14:35.817 [2024-12-15 19:35:22.606072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:35.817 [2024-12-15 19:35:22.606178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83639 ] 00:14:36.076 [2024-12-15 19:35:22.741207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.076 [2024-12-15 19:35:22.837885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.013 19:35:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.013 19:35:23 -- common/autotest_common.sh@862 -- # return 0 00:14:37.013 19:35:23 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:37.271 Nvme0n1 00:14:37.271 19:35:23 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:37.271 [ 00:14:37.271 { 00:14:37.271 "aliases": [ 00:14:37.271 "fd131304-2d71-4ca6-9e6b-a75ba9cb609c" 00:14:37.271 ], 00:14:37.271 "assigned_rate_limits": { 00:14:37.271 "r_mbytes_per_sec": 0, 00:14:37.271 "rw_ios_per_sec": 0, 00:14:37.271 "rw_mbytes_per_sec": 0, 00:14:37.271 "w_mbytes_per_sec": 0 00:14:37.271 }, 00:14:37.271 "block_size": 4096, 00:14:37.271 "claimed": false, 00:14:37.271 "driver_specific": { 00:14:37.271 "mp_policy": "active_passive", 00:14:37.271 "nvme": [ 00:14:37.271 { 00:14:37.271 "ctrlr_data": { 00:14:37.271 "ana_reporting": false, 00:14:37.271 "cntlid": 1, 00:14:37.271 "firmware_revision": "24.01.1", 00:14:37.271 "model_number": "SPDK bdev Controller", 00:14:37.271 "multi_ctrlr": true, 00:14:37.271 "oacs": { 00:14:37.271 "firmware": 0, 00:14:37.271 "format": 0, 00:14:37.271 "ns_manage": 0, 00:14:37.271 "security": 0 00:14:37.271 }, 00:14:37.271 "serial_number": "SPDK0", 00:14:37.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.271 "vendor_id": "0x8086" 00:14:37.271 }, 00:14:37.271 "ns_data": { 00:14:37.271 "can_share": true, 00:14:37.271 "id": 1 00:14:37.271 }, 00:14:37.271 "trid": { 00:14:37.271 "adrfam": "IPv4", 00:14:37.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.271 "traddr": "10.0.0.2", 00:14:37.271 "trsvcid": "4420", 00:14:37.271 "trtype": "TCP" 00:14:37.271 }, 00:14:37.271 "vs": { 00:14:37.271 "nvme_version": "1.3" 00:14:37.271 } 00:14:37.271 } 00:14:37.271 ] 00:14:37.271 }, 00:14:37.271 "name": "Nvme0n1", 00:14:37.271 "num_blocks": 38912, 00:14:37.271 "product_name": "NVMe disk", 00:14:37.271 "supported_io_types": { 00:14:37.271 "abort": true, 00:14:37.271 "compare": true, 00:14:37.271 "compare_and_write": true, 00:14:37.272 "flush": true, 00:14:37.272 "nvme_admin": true, 00:14:37.272 "nvme_io": true, 00:14:37.272 "read": true, 00:14:37.272 "reset": true, 00:14:37.272 "unmap": 
true, 00:14:37.272 "write": true, 00:14:37.272 "write_zeroes": true 00:14:37.272 }, 00:14:37.272 "uuid": "fd131304-2d71-4ca6-9e6b-a75ba9cb609c", 00:14:37.272 "zoned": false 00:14:37.272 } 00:14:37.272 ] 00:14:37.272 19:35:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83688 00:14:37.272 19:35:24 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.272 19:35:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:37.530 Running I/O for 10 seconds... 00:14:38.466 Latency(us) 00:14:38.466 [2024-12-15T19:35:25.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.466 [2024-12-15T19:35:25.362Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.466 Nvme0n1 : 1.00 8371.00 32.70 0.00 0.00 0.00 0.00 0.00 00:14:38.466 [2024-12-15T19:35:25.362Z] =================================================================================================================== 00:14:38.466 [2024-12-15T19:35:25.362Z] Total : 8371.00 32.70 0.00 0.00 0.00 0.00 0.00 00:14:38.466 00:14:39.401 19:35:26 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:39.401 [2024-12-15T19:35:26.297Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.401 Nvme0n1 : 2.00 8400.00 32.81 0.00 0.00 0.00 0.00 0.00 00:14:39.401 [2024-12-15T19:35:26.297Z] =================================================================================================================== 00:14:39.401 [2024-12-15T19:35:26.297Z] Total : 8400.00 32.81 0.00 0.00 0.00 0.00 0.00 00:14:39.401 00:14:39.660 true 00:14:39.660 19:35:26 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:39.660 19:35:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:39.919 19:35:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:39.919 19:35:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:39.919 19:35:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 83688 00:14:40.487 [2024-12-15T19:35:27.383Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.487 Nvme0n1 : 3.00 8345.00 32.60 0.00 0.00 0.00 0.00 0.00 00:14:40.487 [2024-12-15T19:35:27.383Z] =================================================================================================================== 00:14:40.487 [2024-12-15T19:35:27.383Z] Total : 8345.00 32.60 0.00 0.00 0.00 0.00 0.00 00:14:40.487 00:14:41.424 [2024-12-15T19:35:28.320Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.424 Nvme0n1 : 4.00 8332.50 32.55 0.00 0.00 0.00 0.00 0.00 00:14:41.424 [2024-12-15T19:35:28.320Z] =================================================================================================================== 00:14:41.424 [2024-12-15T19:35:28.320Z] Total : 8332.50 32.55 0.00 0.00 0.00 0.00 0.00 00:14:41.424 00:14:42.361 [2024-12-15T19:35:29.257Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.361 Nvme0n1 : 5.00 8322.00 32.51 0.00 0.00 0.00 0.00 0.00 00:14:42.361 [2024-12-15T19:35:29.257Z] =================================================================================================================== 00:14:42.361 [2024-12-15T19:35:29.257Z] Total : 8322.00 32.51 0.00 0.00 0.00 0.00 0.00 00:14:42.361 
00:14:43.737 [2024-12-15T19:35:30.633Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.737 Nvme0n1 : 6.00 8316.33 32.49 0.00 0.00 0.00 0.00 0.00 00:14:43.737 [2024-12-15T19:35:30.633Z] =================================================================================================================== 00:14:43.737 [2024-12-15T19:35:30.633Z] Total : 8316.33 32.49 0.00 0.00 0.00 0.00 0.00 00:14:43.737 00:14:44.673 [2024-12-15T19:35:31.569Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.673 Nvme0n1 : 7.00 8282.43 32.35 0.00 0.00 0.00 0.00 0.00 00:14:44.673 [2024-12-15T19:35:31.569Z] =================================================================================================================== 00:14:44.673 [2024-12-15T19:35:31.569Z] Total : 8282.43 32.35 0.00 0.00 0.00 0.00 0.00 00:14:44.673 00:14:45.609 [2024-12-15T19:35:32.505Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.609 Nvme0n1 : 8.00 8248.38 32.22 0.00 0.00 0.00 0.00 0.00 00:14:45.609 [2024-12-15T19:35:32.505Z] =================================================================================================================== 00:14:45.609 [2024-12-15T19:35:32.505Z] Total : 8248.38 32.22 0.00 0.00 0.00 0.00 0.00 00:14:45.609 00:14:46.545 [2024-12-15T19:35:33.441Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.545 Nvme0n1 : 9.00 8259.78 32.26 0.00 0.00 0.00 0.00 0.00 00:14:46.545 [2024-12-15T19:35:33.441Z] =================================================================================================================== 00:14:46.545 [2024-12-15T19:35:33.441Z] Total : 8259.78 32.26 0.00 0.00 0.00 0.00 0.00 00:14:46.545 00:14:47.482 [2024-12-15T19:35:34.378Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.482 Nvme0n1 : 10.00 8248.90 32.22 0.00 0.00 0.00 0.00 0.00 00:14:47.482 [2024-12-15T19:35:34.378Z] =================================================================================================================== 00:14:47.482 [2024-12-15T19:35:34.378Z] Total : 8248.90 32.22 0.00 0.00 0.00 0.00 0.00 00:14:47.482 00:14:47.741 00:14:47.741 Latency(us) 00:14:47.741 [2024-12-15T19:35:34.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.741 [2024-12-15T19:35:34.637Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.741 Nvme0n1 : 10.17 8126.14 31.74 0.00 0.00 15742.15 7506.85 187790.43 00:14:47.741 [2024-12-15T19:35:34.637Z] =================================================================================================================== 00:14:47.741 [2024-12-15T19:35:34.637Z] Total : 8126.14 31.74 0.00 0.00 15742.15 7506.85 187790.43 00:14:47.741 0 00:14:47.741 19:35:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83639 00:14:47.741 19:35:34 -- common/autotest_common.sh@936 -- # '[' -z 83639 ']' 00:14:47.741 19:35:34 -- common/autotest_common.sh@940 -- # kill -0 83639 00:14:47.741 19:35:34 -- common/autotest_common.sh@941 -- # uname 00:14:47.741 19:35:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.741 19:35:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83639 00:14:47.741 19:35:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:47.741 19:35:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:47.741 killing process with pid 83639 00:14:47.741 19:35:34 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 83639' 00:14:47.741 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.741 00:14:47.741 Latency(us) 00:14:47.741 [2024-12-15T19:35:34.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.741 [2024-12-15T19:35:34.638Z] =================================================================================================================== 00:14:47.742 [2024-12-15T19:35:34.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.742 19:35:34 -- common/autotest_common.sh@955 -- # kill 83639 00:14:47.742 19:35:34 -- common/autotest_common.sh@960 -- # wait 83639 00:14:48.001 19:35:34 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:48.260 19:35:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:48.260 19:35:35 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:48.518 19:35:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:48.518 19:35:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:48.518 19:35:35 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.776 [2024-12-15 19:35:35.514579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.776 19:35:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:48.776 19:35:35 -- common/autotest_common.sh@650 -- # local es=0 00:14:48.776 19:35:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:48.776 19:35:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.776 19:35:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.776 19:35:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.776 19:35:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.776 19:35:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.776 19:35:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.776 19:35:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.776 19:35:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:48.776 19:35:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:49.035 2024/12/15 19:35:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6b817aad-db07-4ba4-b930-c1df65128c71], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:49.035 request: 00:14:49.035 { 00:14:49.035 "method": "bdev_lvol_get_lvstores", 00:14:49.035 "params": { 00:14:49.035 "uuid": "6b817aad-db07-4ba4-b930-c1df65128c71" 00:14:49.035 } 00:14:49.035 } 00:14:49.035 Got JSON-RPC error response 00:14:49.035 GoRPCClient: error on JSON-RPC call 00:14:49.035 19:35:35 -- common/autotest_common.sh@653 -- # es=1 00:14:49.035 19:35:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:49.035 
19:35:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:49.035 19:35:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:49.035 19:35:35 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.294 aio_bdev 00:14:49.294 19:35:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fd131304-2d71-4ca6-9e6b-a75ba9cb609c 00:14:49.294 19:35:36 -- common/autotest_common.sh@897 -- # local bdev_name=fd131304-2d71-4ca6-9e6b-a75ba9cb609c 00:14:49.294 19:35:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.294 19:35:36 -- common/autotest_common.sh@899 -- # local i 00:14:49.294 19:35:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.294 19:35:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.294 19:35:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.553 19:35:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd131304-2d71-4ca6-9e6b-a75ba9cb609c -t 2000 00:14:49.812 [ 00:14:49.812 { 00:14:49.812 "aliases": [ 00:14:49.812 "lvs/lvol" 00:14:49.812 ], 00:14:49.812 "assigned_rate_limits": { 00:14:49.812 "r_mbytes_per_sec": 0, 00:14:49.812 "rw_ios_per_sec": 0, 00:14:49.812 "rw_mbytes_per_sec": 0, 00:14:49.812 "w_mbytes_per_sec": 0 00:14:49.812 }, 00:14:49.812 "block_size": 4096, 00:14:49.812 "claimed": false, 00:14:49.812 "driver_specific": { 00:14:49.812 "lvol": { 00:14:49.812 "base_bdev": "aio_bdev", 00:14:49.812 "clone": false, 00:14:49.812 "esnap_clone": false, 00:14:49.812 "lvol_store_uuid": "6b817aad-db07-4ba4-b930-c1df65128c71", 00:14:49.812 "snapshot": false, 00:14:49.812 "thin_provision": false 00:14:49.813 } 00:14:49.813 }, 00:14:49.813 "name": "fd131304-2d71-4ca6-9e6b-a75ba9cb609c", 00:14:49.813 "num_blocks": 38912, 00:14:49.813 "product_name": "Logical Volume", 00:14:49.813 "supported_io_types": { 00:14:49.813 "abort": false, 00:14:49.813 "compare": false, 00:14:49.813 "compare_and_write": false, 00:14:49.813 "flush": false, 00:14:49.813 "nvme_admin": false, 00:14:49.813 "nvme_io": false, 00:14:49.813 "read": true, 00:14:49.813 "reset": true, 00:14:49.813 "unmap": true, 00:14:49.813 "write": true, 00:14:49.813 "write_zeroes": true 00:14:49.813 }, 00:14:49.813 "uuid": "fd131304-2d71-4ca6-9e6b-a75ba9cb609c", 00:14:49.813 "zoned": false 00:14:49.813 } 00:14:49.813 ] 00:14:49.813 19:35:36 -- common/autotest_common.sh@905 -- # return 0 00:14:49.813 19:35:36 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:49.813 19:35:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:50.072 19:35:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:50.072 19:35:36 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b817aad-db07-4ba4-b930-c1df65128c71 00:14:50.072 19:35:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:50.331 19:35:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:50.331 19:35:37 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fd131304-2d71-4ca6-9e6b-a75ba9cb609c 00:14:50.602 19:35:37 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b817aad-db07-4ba4-b930-c1df65128c71 
00:14:50.883 19:35:37 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.883 19:35:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:51.463 00:14:51.463 real 0m18.114s 00:14:51.463 user 0m17.338s 00:14:51.463 sys 0m2.254s 00:14:51.463 19:35:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.463 19:35:38 -- common/autotest_common.sh@10 -- # set +x 00:14:51.463 ************************************ 00:14:51.463 END TEST lvs_grow_clean 00:14:51.463 ************************************ 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:51.463 19:35:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.463 19:35:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.463 19:35:38 -- common/autotest_common.sh@10 -- # set +x 00:14:51.463 ************************************ 00:14:51.463 START TEST lvs_grow_dirty 00:14:51.463 ************************************ 00:14:51.463 19:35:38 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:51.463 19:35:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.722 19:35:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:51.722 19:35:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:51.981 19:35:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:14:51.981 19:35:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:14:51.981 19:35:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:52.240 19:35:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:52.240 19:35:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:52.240 19:35:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 lvol 150 00:14:52.499 19:35:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=863f9660-df48-4687-a0e3-df4fe5072f85 00:14:52.499 19:35:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.499 19:35:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:52.758 [2024-12-15 19:35:39.515866] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:52.758 [2024-12-15 19:35:39.515930] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:52.758 true 00:14:52.758 19:35:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:14:52.758 19:35:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:53.016 19:35:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:53.016 19:35:39 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:53.275 19:35:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 863f9660-df48-4687-a0e3-df4fe5072f85 00:14:53.533 19:35:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:53.792 19:35:40 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:54.051 19:35:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84073 00:14:54.051 19:35:40 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:54.051 19:35:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.051 19:35:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84073 /var/tmp/bdevperf.sock 00:14:54.051 19:35:40 -- common/autotest_common.sh@829 -- # '[' -z 84073 ']' 00:14:54.051 19:35:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.051 19:35:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.051 19:35:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.051 19:35:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.051 19:35:40 -- common/autotest_common.sh@10 -- # set +x 00:14:54.051 [2024-12-15 19:35:40.862407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:54.051 [2024-12-15 19:35:40.862521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84073 ] 00:14:54.310 [2024-12-15 19:35:41.003139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.310 [2024-12-15 19:35:41.090590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.247 19:35:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.247 19:35:41 -- common/autotest_common.sh@862 -- # return 0 00:14:55.247 19:35:41 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:55.247 Nvme0n1 00:14:55.247 19:35:42 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:55.506 [ 00:14:55.506 { 00:14:55.506 "aliases": [ 00:14:55.506 "863f9660-df48-4687-a0e3-df4fe5072f85" 00:14:55.506 ], 00:14:55.506 "assigned_rate_limits": { 00:14:55.506 "r_mbytes_per_sec": 0, 00:14:55.506 "rw_ios_per_sec": 0, 00:14:55.506 "rw_mbytes_per_sec": 0, 00:14:55.506 "w_mbytes_per_sec": 0 00:14:55.506 }, 00:14:55.506 "block_size": 4096, 00:14:55.506 "claimed": false, 00:14:55.506 "driver_specific": { 00:14:55.506 "mp_policy": "active_passive", 00:14:55.506 "nvme": [ 00:14:55.506 { 00:14:55.506 "ctrlr_data": { 00:14:55.506 "ana_reporting": false, 00:14:55.506 "cntlid": 1, 00:14:55.506 "firmware_revision": "24.01.1", 00:14:55.506 "model_number": "SPDK bdev Controller", 00:14:55.506 "multi_ctrlr": true, 00:14:55.506 "oacs": { 00:14:55.506 "firmware": 0, 00:14:55.506 "format": 0, 00:14:55.506 "ns_manage": 0, 00:14:55.506 "security": 0 00:14:55.506 }, 00:14:55.506 "serial_number": "SPDK0", 00:14:55.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:55.506 "vendor_id": "0x8086" 00:14:55.506 }, 00:14:55.506 "ns_data": { 00:14:55.506 "can_share": true, 00:14:55.506 "id": 1 00:14:55.506 }, 00:14:55.506 "trid": { 00:14:55.506 "adrfam": "IPv4", 00:14:55.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:55.506 "traddr": "10.0.0.2", 00:14:55.506 "trsvcid": "4420", 00:14:55.506 "trtype": "TCP" 00:14:55.506 }, 00:14:55.506 "vs": { 00:14:55.506 "nvme_version": "1.3" 00:14:55.506 } 00:14:55.506 } 00:14:55.506 ] 00:14:55.506 }, 00:14:55.506 "name": "Nvme0n1", 00:14:55.506 "num_blocks": 38912, 00:14:55.506 "product_name": "NVMe disk", 00:14:55.507 "supported_io_types": { 00:14:55.507 "abort": true, 00:14:55.507 "compare": true, 00:14:55.507 "compare_and_write": true, 00:14:55.507 "flush": true, 00:14:55.507 "nvme_admin": true, 00:14:55.507 "nvme_io": true, 00:14:55.507 "read": true, 00:14:55.507 "reset": true, 00:14:55.507 "unmap": true, 00:14:55.507 "write": true, 00:14:55.507 "write_zeroes": true 00:14:55.507 }, 00:14:55.507 "uuid": "863f9660-df48-4687-a0e3-df4fe5072f85", 00:14:55.507 "zoned": false 00:14:55.507 } 00:14:55.507 ] 00:14:55.507 19:35:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84115 00:14:55.507 19:35:42 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.507 19:35:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:55.765 Running I/O for 10 seconds... 
00:14:56.701 Latency(us) 00:14:56.701 [2024-12-15T19:35:43.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.701 [2024-12-15T19:35:43.597Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.701 Nvme0n1 : 1.00 8734.00 34.12 0.00 0.00 0.00 0.00 0.00 00:14:56.701 [2024-12-15T19:35:43.597Z] =================================================================================================================== 00:14:56.701 [2024-12-15T19:35:43.597Z] Total : 8734.00 34.12 0.00 0.00 0.00 0.00 0.00 00:14:56.701 00:14:57.637 19:35:44 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:14:57.637 [2024-12-15T19:35:44.533Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.637 Nvme0n1 : 2.00 8565.50 33.46 0.00 0.00 0.00 0.00 0.00 00:14:57.637 [2024-12-15T19:35:44.533Z] =================================================================================================================== 00:14:57.637 [2024-12-15T19:35:44.533Z] Total : 8565.50 33.46 0.00 0.00 0.00 0.00 0.00 00:14:57.637 00:14:57.896 true 00:14:57.896 19:35:44 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:14:57.896 19:35:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:58.154 19:35:44 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:58.154 19:35:44 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:58.154 19:35:44 -- target/nvmf_lvs_grow.sh@65 -- # wait 84115 00:14:58.721 [2024-12-15T19:35:45.617Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.721 Nvme0n1 : 3.00 8309.00 32.46 0.00 0.00 0.00 0.00 0.00 00:14:58.721 [2024-12-15T19:35:45.617Z] =================================================================================================================== 00:14:58.721 [2024-12-15T19:35:45.617Z] Total : 8309.00 32.46 0.00 0.00 0.00 0.00 0.00 00:14:58.721 00:14:59.656 [2024-12-15T19:35:46.552Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.656 Nvme0n1 : 4.00 8298.00 32.41 0.00 0.00 0.00 0.00 0.00 00:14:59.656 [2024-12-15T19:35:46.552Z] =================================================================================================================== 00:14:59.656 [2024-12-15T19:35:46.552Z] Total : 8298.00 32.41 0.00 0.00 0.00 0.00 0.00 00:14:59.656 00:15:00.590 [2024-12-15T19:35:47.486Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.590 Nvme0n1 : 5.00 8305.60 32.44 0.00 0.00 0.00 0.00 0.00 00:15:00.590 [2024-12-15T19:35:47.486Z] =================================================================================================================== 00:15:00.590 [2024-12-15T19:35:47.486Z] Total : 8305.60 32.44 0.00 0.00 0.00 0.00 0.00 00:15:00.590 00:15:01.966 [2024-12-15T19:35:48.862Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.966 Nvme0n1 : 6.00 8317.67 32.49 0.00 0.00 0.00 0.00 0.00 00:15:01.966 [2024-12-15T19:35:48.862Z] =================================================================================================================== 00:15:01.966 [2024-12-15T19:35:48.862Z] Total : 8317.67 32.49 0.00 0.00 0.00 0.00 0.00 00:15:01.966 00:15:02.902 [2024-12-15T19:35:49.798Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:02.902 Nvme0n1 : 7.00 8182.43 31.96 0.00 0.00 0.00 0.00 0.00 00:15:02.902 [2024-12-15T19:35:49.798Z] =================================================================================================================== 00:15:02.902 [2024-12-15T19:35:49.798Z] Total : 8182.43 31.96 0.00 0.00 0.00 0.00 0.00 00:15:02.902 00:15:03.838 [2024-12-15T19:35:50.734Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.838 Nvme0n1 : 8.00 8213.12 32.08 0.00 0.00 0.00 0.00 0.00 00:15:03.838 [2024-12-15T19:35:50.734Z] =================================================================================================================== 00:15:03.838 [2024-12-15T19:35:50.734Z] Total : 8213.12 32.08 0.00 0.00 0.00 0.00 0.00 00:15:03.838 00:15:04.773 [2024-12-15T19:35:51.669Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.773 Nvme0n1 : 9.00 8239.89 32.19 0.00 0.00 0.00 0.00 0.00 00:15:04.773 [2024-12-15T19:35:51.669Z] =================================================================================================================== 00:15:04.773 [2024-12-15T19:35:51.669Z] Total : 8239.89 32.19 0.00 0.00 0.00 0.00 0.00 00:15:04.773 00:15:05.710 [2024-12-15T19:35:52.606Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.710 Nvme0n1 : 10.00 8257.90 32.26 0.00 0.00 0.00 0.00 0.00 00:15:05.710 [2024-12-15T19:35:52.606Z] =================================================================================================================== 00:15:05.710 [2024-12-15T19:35:52.606Z] Total : 8257.90 32.26 0.00 0.00 0.00 0.00 0.00 00:15:05.710 00:15:05.710 00:15:05.710 Latency(us) 00:15:05.710 [2024-12-15T19:35:52.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.710 [2024-12-15T19:35:52.606Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.710 Nvme0n1 : 10.01 8263.74 32.28 0.00 0.00 15480.95 7119.59 164912.41 00:15:05.710 [2024-12-15T19:35:52.606Z] =================================================================================================================== 00:15:05.710 [2024-12-15T19:35:52.606Z] Total : 8263.74 32.28 0.00 0.00 15480.95 7119.59 164912.41 00:15:05.710 0 00:15:05.710 19:35:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84073 00:15:05.710 19:35:52 -- common/autotest_common.sh@936 -- # '[' -z 84073 ']' 00:15:05.710 19:35:52 -- common/autotest_common.sh@940 -- # kill -0 84073 00:15:05.710 19:35:52 -- common/autotest_common.sh@941 -- # uname 00:15:05.710 19:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.710 19:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84073 00:15:05.710 19:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:05.710 19:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:05.710 killing process with pid 84073 00:15:05.710 19:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84073' 00:15:05.710 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.710 00:15:05.710 Latency(us) 00:15:05.710 [2024-12-15T19:35:52.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.710 [2024-12-15T19:35:52.606Z] =================================================================================================================== 00:15:05.710 [2024-12-15T19:35:52.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.710 19:35:52 -- common/autotest_common.sh@955 
-- # kill 84073 00:15:05.710 19:35:52 -- common/autotest_common.sh@960 -- # wait 84073 00:15:05.969 19:35:52 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:06.228 19:35:53 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:06.228 19:35:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83474 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@74 -- # wait 83474 00:15:06.487 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83474 Killed "${NVMF_APP[@]}" "$@" 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:06.487 19:35:53 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:06.487 19:35:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:06.487 19:35:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.487 19:35:53 -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 19:35:53 -- nvmf/common.sh@469 -- # nvmfpid=84275 00:15:06.487 19:35:53 -- nvmf/common.sh@470 -- # waitforlisten 84275 00:15:06.487 19:35:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:06.487 19:35:53 -- common/autotest_common.sh@829 -- # '[' -z 84275 ']' 00:15:06.487 19:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.487 19:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.487 19:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.487 19:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.487 19:35:53 -- common/autotest_common.sh@10 -- # set +x 00:15:06.746 [2024-12-15 19:35:53.399754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:06.746 [2024-12-15 19:35:53.400495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.746 [2024-12-15 19:35:53.534921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.746 [2024-12-15 19:35:53.605567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:06.746 [2024-12-15 19:35:53.605713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.746 [2024-12-15 19:35:53.605726] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.746 [2024-12-15 19:35:53.605735] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.746 [2024-12-15 19:35:53.605766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.682 19:35:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.682 19:35:54 -- common/autotest_common.sh@862 -- # return 0 00:15:07.683 19:35:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.683 19:35:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.683 19:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 19:35:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.683 19:35:54 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:07.941 [2024-12-15 19:35:54.639368] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:07.941 [2024-12-15 19:35:54.639785] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:07.941 [2024-12-15 19:35:54.640073] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:07.941 19:35:54 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:07.941 19:35:54 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 863f9660-df48-4687-a0e3-df4fe5072f85 00:15:07.941 19:35:54 -- common/autotest_common.sh@897 -- # local bdev_name=863f9660-df48-4687-a0e3-df4fe5072f85 00:15:07.941 19:35:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:07.941 19:35:54 -- common/autotest_common.sh@899 -- # local i 00:15:07.941 19:35:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:07.941 19:35:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:07.941 19:35:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:08.200 19:35:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 863f9660-df48-4687-a0e3-df4fe5072f85 -t 2000 00:15:08.458 [ 00:15:08.458 { 00:15:08.458 "aliases": [ 00:15:08.458 "lvs/lvol" 00:15:08.458 ], 00:15:08.458 "assigned_rate_limits": { 00:15:08.458 "r_mbytes_per_sec": 0, 00:15:08.458 "rw_ios_per_sec": 0, 00:15:08.458 "rw_mbytes_per_sec": 0, 00:15:08.458 "w_mbytes_per_sec": 0 00:15:08.458 }, 00:15:08.458 "block_size": 4096, 00:15:08.458 "claimed": false, 00:15:08.458 "driver_specific": { 00:15:08.458 "lvol": { 00:15:08.458 "base_bdev": "aio_bdev", 00:15:08.458 "clone": false, 00:15:08.458 "esnap_clone": false, 00:15:08.458 "lvol_store_uuid": "38bc6f1d-8b45-475b-bc68-5610bbdc1ca9", 00:15:08.458 "snapshot": false, 00:15:08.458 "thin_provision": false 00:15:08.458 } 00:15:08.458 }, 00:15:08.458 "name": "863f9660-df48-4687-a0e3-df4fe5072f85", 00:15:08.458 "num_blocks": 38912, 00:15:08.458 "product_name": "Logical Volume", 00:15:08.458 "supported_io_types": { 00:15:08.458 "abort": false, 00:15:08.458 "compare": false, 00:15:08.458 "compare_and_write": false, 00:15:08.458 "flush": false, 00:15:08.458 "nvme_admin": false, 00:15:08.458 "nvme_io": false, 00:15:08.458 "read": true, 00:15:08.458 "reset": true, 00:15:08.458 "unmap": true, 00:15:08.458 "write": true, 00:15:08.458 "write_zeroes": true 00:15:08.458 }, 00:15:08.458 "uuid": "863f9660-df48-4687-a0e3-df4fe5072f85", 00:15:08.458 "zoned": false 00:15:08.458 } 00:15:08.458 ] 00:15:08.458 19:35:55 -- common/autotest_common.sh@905 -- # return 0 00:15:08.458 19:35:55 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:08.458 19:35:55 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:08.716 19:35:55 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:08.716 19:35:55 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:08.716 19:35:55 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:08.975 19:35:55 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:08.975 19:35:55 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:08.975 [2024-12-15 19:35:55.864886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:09.234 19:35:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:09.234 19:35:55 -- common/autotest_common.sh@650 -- # local es=0 00:15:09.234 19:35:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:09.234 19:35:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.234 19:35:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.234 19:35:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.234 19:35:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.234 19:35:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.234 19:35:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.234 19:35:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.234 19:35:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:09.234 19:35:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:09.493 2024/12/15 19:35:56 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:38bc6f1d-8b45-475b-bc68-5610bbdc1ca9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:09.493 request: 00:15:09.493 { 00:15:09.493 "method": "bdev_lvol_get_lvstores", 00:15:09.493 "params": { 00:15:09.493 "uuid": "38bc6f1d-8b45-475b-bc68-5610bbdc1ca9" 00:15:09.493 } 00:15:09.493 } 00:15:09.493 Got JSON-RPC error response 00:15:09.493 GoRPCClient: error on JSON-RPC call 00:15:09.493 19:35:56 -- common/autotest_common.sh@653 -- # es=1 00:15:09.493 19:35:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.493 19:35:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.493 19:35:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.493 19:35:56 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:09.752 aio_bdev 00:15:09.752 19:35:56 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 863f9660-df48-4687-a0e3-df4fe5072f85 00:15:09.752 19:35:56 -- common/autotest_common.sh@897 -- # local bdev_name=863f9660-df48-4687-a0e3-df4fe5072f85 00:15:09.752 19:35:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.752 
19:35:56 -- common/autotest_common.sh@899 -- # local i 00:15:09.752 19:35:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.752 19:35:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.752 19:35:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:10.011 19:35:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 863f9660-df48-4687-a0e3-df4fe5072f85 -t 2000 00:15:10.011 [ 00:15:10.011 { 00:15:10.011 "aliases": [ 00:15:10.011 "lvs/lvol" 00:15:10.011 ], 00:15:10.011 "assigned_rate_limits": { 00:15:10.011 "r_mbytes_per_sec": 0, 00:15:10.011 "rw_ios_per_sec": 0, 00:15:10.011 "rw_mbytes_per_sec": 0, 00:15:10.011 "w_mbytes_per_sec": 0 00:15:10.011 }, 00:15:10.011 "block_size": 4096, 00:15:10.011 "claimed": false, 00:15:10.011 "driver_specific": { 00:15:10.011 "lvol": { 00:15:10.011 "base_bdev": "aio_bdev", 00:15:10.011 "clone": false, 00:15:10.011 "esnap_clone": false, 00:15:10.011 "lvol_store_uuid": "38bc6f1d-8b45-475b-bc68-5610bbdc1ca9", 00:15:10.011 "snapshot": false, 00:15:10.011 "thin_provision": false 00:15:10.011 } 00:15:10.011 }, 00:15:10.011 "name": "863f9660-df48-4687-a0e3-df4fe5072f85", 00:15:10.011 "num_blocks": 38912, 00:15:10.011 "product_name": "Logical Volume", 00:15:10.011 "supported_io_types": { 00:15:10.011 "abort": false, 00:15:10.011 "compare": false, 00:15:10.011 "compare_and_write": false, 00:15:10.011 "flush": false, 00:15:10.011 "nvme_admin": false, 00:15:10.011 "nvme_io": false, 00:15:10.011 "read": true, 00:15:10.011 "reset": true, 00:15:10.011 "unmap": true, 00:15:10.011 "write": true, 00:15:10.011 "write_zeroes": true 00:15:10.011 }, 00:15:10.011 "uuid": "863f9660-df48-4687-a0e3-df4fe5072f85", 00:15:10.011 "zoned": false 00:15:10.011 } 00:15:10.011 ] 00:15:10.011 19:35:56 -- common/autotest_common.sh@905 -- # return 0 00:15:10.011 19:35:56 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:10.011 19:35:56 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:10.270 19:35:57 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:10.270 19:35:57 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:10.270 19:35:57 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:10.529 19:35:57 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:10.529 19:35:57 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 863f9660-df48-4687-a0e3-df4fe5072f85 00:15:10.789 19:35:57 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38bc6f1d-8b45-475b-bc68-5610bbdc1ca9 00:15:11.067 19:35:57 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.338 19:35:58 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.905 00:15:11.905 real 0m20.278s 00:15:11.905 user 0m40.595s 00:15:11.905 sys 0m9.129s 00:15:11.905 19:35:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:11.905 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.905 ************************************ 00:15:11.905 END TEST lvs_grow_dirty 00:15:11.905 ************************************ 00:15:11.905 19:35:58 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:11.905 19:35:58 -- common/autotest_common.sh@806 -- # type=--id 00:15:11.905 19:35:58 -- common/autotest_common.sh@807 -- # id=0 00:15:11.905 19:35:58 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:11.905 19:35:58 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:11.905 19:35:58 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:11.905 19:35:58 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:11.905 19:35:58 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:11.905 19:35:58 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:11.905 nvmf_trace.0 00:15:11.905 19:35:58 -- common/autotest_common.sh@821 -- # return 0 00:15:11.905 19:35:58 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:11.905 19:35:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:11.905 19:35:58 -- nvmf/common.sh@116 -- # sync 00:15:11.905 19:35:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:11.905 19:35:58 -- nvmf/common.sh@119 -- # set +e 00:15:11.905 19:35:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.905 19:35:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:11.905 rmmod nvme_tcp 00:15:11.905 rmmod nvme_fabrics 00:15:11.905 rmmod nvme_keyring 00:15:12.164 19:35:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:12.164 19:35:58 -- nvmf/common.sh@123 -- # set -e 00:15:12.164 19:35:58 -- nvmf/common.sh@124 -- # return 0 00:15:12.164 19:35:58 -- nvmf/common.sh@477 -- # '[' -n 84275 ']' 00:15:12.164 19:35:58 -- nvmf/common.sh@478 -- # killprocess 84275 00:15:12.164 19:35:58 -- common/autotest_common.sh@936 -- # '[' -z 84275 ']' 00:15:12.164 19:35:58 -- common/autotest_common.sh@940 -- # kill -0 84275 00:15:12.164 19:35:58 -- common/autotest_common.sh@941 -- # uname 00:15:12.164 19:35:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.164 19:35:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84275 00:15:12.164 19:35:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.164 19:35:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.164 killing process with pid 84275 00:15:12.164 19:35:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84275' 00:15:12.164 19:35:58 -- common/autotest_common.sh@955 -- # kill 84275 00:15:12.164 19:35:58 -- common/autotest_common.sh@960 -- # wait 84275 00:15:12.423 19:35:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:12.423 19:35:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:12.423 19:35:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:12.423 19:35:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.423 19:35:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:12.423 19:35:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.423 19:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.423 19:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.423 19:35:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:12.423 00:15:12.423 real 0m40.917s 00:15:12.423 user 1m4.209s 00:15:12.423 sys 0m12.208s 00:15:12.423 19:35:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:12.423 19:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.423 
************************************ 00:15:12.423 END TEST nvmf_lvs_grow 00:15:12.423 ************************************ 00:15:12.423 19:35:59 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:12.423 19:35:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.423 19:35:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.423 19:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.423 ************************************ 00:15:12.423 START TEST nvmf_bdev_io_wait 00:15:12.423 ************************************ 00:15:12.423 19:35:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:12.423 * Looking for test storage... 00:15:12.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.423 19:35:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:12.423 19:35:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:12.423 19:35:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:12.682 19:35:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:12.682 19:35:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:12.682 19:35:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:12.682 19:35:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:12.682 19:35:59 -- scripts/common.sh@335 -- # IFS=.-: 00:15:12.682 19:35:59 -- scripts/common.sh@335 -- # read -ra ver1 00:15:12.682 19:35:59 -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.682 19:35:59 -- scripts/common.sh@336 -- # read -ra ver2 00:15:12.682 19:35:59 -- scripts/common.sh@337 -- # local 'op=<' 00:15:12.682 19:35:59 -- scripts/common.sh@339 -- # ver1_l=2 00:15:12.682 19:35:59 -- scripts/common.sh@340 -- # ver2_l=1 00:15:12.682 19:35:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:12.682 19:35:59 -- scripts/common.sh@343 -- # case "$op" in 00:15:12.682 19:35:59 -- scripts/common.sh@344 -- # : 1 00:15:12.682 19:35:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:12.682 19:35:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.682 19:35:59 -- scripts/common.sh@364 -- # decimal 1 00:15:12.682 19:35:59 -- scripts/common.sh@352 -- # local d=1 00:15:12.682 19:35:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.682 19:35:59 -- scripts/common.sh@354 -- # echo 1 00:15:12.682 19:35:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:12.682 19:35:59 -- scripts/common.sh@365 -- # decimal 2 00:15:12.682 19:35:59 -- scripts/common.sh@352 -- # local d=2 00:15:12.682 19:35:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.682 19:35:59 -- scripts/common.sh@354 -- # echo 2 00:15:12.682 19:35:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:12.682 19:35:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:12.682 19:35:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:12.682 19:35:59 -- scripts/common.sh@367 -- # return 0 00:15:12.682 19:35:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.682 19:35:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:12.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.682 --rc genhtml_branch_coverage=1 00:15:12.682 --rc genhtml_function_coverage=1 00:15:12.682 --rc genhtml_legend=1 00:15:12.682 --rc geninfo_all_blocks=1 00:15:12.682 --rc geninfo_unexecuted_blocks=1 00:15:12.682 00:15:12.682 ' 00:15:12.682 19:35:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:12.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.682 --rc genhtml_branch_coverage=1 00:15:12.682 --rc genhtml_function_coverage=1 00:15:12.682 --rc genhtml_legend=1 00:15:12.682 --rc geninfo_all_blocks=1 00:15:12.682 --rc geninfo_unexecuted_blocks=1 00:15:12.682 00:15:12.682 ' 00:15:12.682 19:35:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:12.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.682 --rc genhtml_branch_coverage=1 00:15:12.682 --rc genhtml_function_coverage=1 00:15:12.682 --rc genhtml_legend=1 00:15:12.682 --rc geninfo_all_blocks=1 00:15:12.682 --rc geninfo_unexecuted_blocks=1 00:15:12.682 00:15:12.682 ' 00:15:12.682 19:35:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:12.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.682 --rc genhtml_branch_coverage=1 00:15:12.682 --rc genhtml_function_coverage=1 00:15:12.682 --rc genhtml_legend=1 00:15:12.682 --rc geninfo_all_blocks=1 00:15:12.682 --rc geninfo_unexecuted_blocks=1 00:15:12.682 00:15:12.682 ' 00:15:12.682 19:35:59 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.682 19:35:59 -- nvmf/common.sh@7 -- # uname -s 00:15:12.682 19:35:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.682 19:35:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.682 19:35:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.682 19:35:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.682 19:35:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.682 19:35:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.682 19:35:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.682 19:35:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.682 19:35:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.682 19:35:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.682 19:35:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:15:12.683 19:35:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:12.683 19:35:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.683 19:35:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.683 19:35:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.683 19:35:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.683 19:35:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.683 19:35:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.683 19:35:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.683 19:35:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.683 19:35:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.683 19:35:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.683 19:35:59 -- paths/export.sh@5 -- # export PATH 00:15:12.683 19:35:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.683 19:35:59 -- nvmf/common.sh@46 -- # : 0 00:15:12.683 19:35:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.683 19:35:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.683 19:35:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.683 19:35:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.683 19:35:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.683 19:35:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:12.683 19:35:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.683 19:35:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.683 19:35:59 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.683 19:35:59 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.683 19:35:59 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:12.683 19:35:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.683 19:35:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.683 19:35:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.683 19:35:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.683 19:35:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.683 19:35:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.683 19:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.683 19:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.683 19:35:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:12.683 19:35:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:12.683 19:35:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:12.683 19:35:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:12.683 19:35:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:12.683 19:35:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:12.683 19:35:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.683 19:35:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.683 19:35:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.683 19:35:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:12.683 19:35:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.683 19:35:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.683 19:35:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.683 19:35:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.683 19:35:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.683 19:35:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.683 19:35:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.683 19:35:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.683 19:35:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:12.683 19:35:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:12.683 Cannot find device "nvmf_tgt_br" 00:15:12.683 19:35:59 -- nvmf/common.sh@154 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.683 Cannot find device "nvmf_tgt_br2" 00:15:12.683 19:35:59 -- nvmf/common.sh@155 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:12.683 19:35:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:12.683 Cannot find device "nvmf_tgt_br" 00:15:12.683 19:35:59 -- nvmf/common.sh@157 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:12.683 Cannot find device "nvmf_tgt_br2" 00:15:12.683 19:35:59 -- nvmf/common.sh@158 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:12.683 19:35:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:12.683 19:35:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.683 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.683 19:35:59 -- nvmf/common.sh@161 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.683 19:35:59 -- nvmf/common.sh@162 -- # true 00:15:12.683 19:35:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.683 19:35:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.683 19:35:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.942 19:35:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.942 19:35:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.942 19:35:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.942 19:35:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.942 19:35:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.942 19:35:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.942 19:35:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:12.942 19:35:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:12.942 19:35:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:12.942 19:35:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:12.942 19:35:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.942 19:35:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.942 19:35:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.942 19:35:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:12.942 19:35:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:12.943 19:35:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.943 19:35:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.943 19:35:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.943 19:35:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.943 19:35:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.943 19:35:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:12.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:12.943 00:15:12.943 --- 10.0.0.2 ping statistics --- 00:15:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.943 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:12.943 19:35:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:12.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:12.943 00:15:12.943 --- 10.0.0.3 ping statistics --- 00:15:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.943 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:12.943 19:35:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:12.943 00:15:12.943 --- 10.0.0.1 ping statistics --- 00:15:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.943 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:12.943 19:35:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.943 19:35:59 -- nvmf/common.sh@421 -- # return 0 00:15:12.943 19:35:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.943 19:35:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.943 19:35:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.943 19:35:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.943 19:35:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.943 19:35:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.943 19:35:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.943 19:35:59 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:12.943 19:35:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.943 19:35:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.943 19:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.943 19:35:59 -- nvmf/common.sh@469 -- # nvmfpid=84694 00:15:12.943 19:35:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:12.943 19:35:59 -- nvmf/common.sh@470 -- # waitforlisten 84694 00:15:12.943 19:35:59 -- common/autotest_common.sh@829 -- # '[' -z 84694 ']' 00:15:12.943 19:35:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.943 19:35:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.943 19:35:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.943 19:35:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.943 19:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.943 [2024-12-15 19:35:59.814037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:12.943 [2024-12-15 19:35:59.814688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.202 [2024-12-15 19:35:59.950558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.202 [2024-12-15 19:36:00.051579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:13.202 [2024-12-15 19:36:00.051724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.202 [2024-12-15 19:36:00.051735] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.202 [2024-12-15 19:36:00.051743] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
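The nvmf_veth_init and nvmfappstart steps traced above reduce to the following standalone sketch (same interface names, addresses and flags as in this log; the relative nvmf_tgt path assumes the repo checkout used by this job):

    # Veth/bridge topology for the TCP tests: initiator side in the root namespace,
    # target interfaces (10.0.0.2, 10.0.0.3) inside nvmf_tgt_ns_spdk, all tied to one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Start the target inside the namespace, paused until framework_start_init (--wait-for-rpc).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &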
00:15:13.202 [2024-12-15 19:36:00.051930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.202 [2024-12-15 19:36:00.052433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.202 [2024-12-15 19:36:00.052489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.202 [2024-12-15 19:36:00.052496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.138 19:36:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.138 19:36:00 -- common/autotest_common.sh@862 -- # return 0 00:15:14.138 19:36:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:14.138 19:36:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 19:36:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.138 19:36:00 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:14.138 19:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 19:36:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:00 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:14.138 19:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 19:36:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:00 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.138 19:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 [2024-12-15 19:36:00.960180] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.138 19:36:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:00 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.138 19:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 Malloc0 00:15:14.138 19:36:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:00 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.138 19:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:00 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 19:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:01 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.138 19:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:01 -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 19:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 19:36:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.138 19:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 19:36:01 -- common/autotest_common.sh@10 -- # set +x 00:15:14.139 [2024-12-15 19:36:01.021607] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.139 19:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84753 00:15:14.139 19:36:01 
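rpc_cmd in the trace above is the autotest wrapper around scripts/rpc.py; the subsystem this test builds corresponds roughly to the following plain rpc.py calls (default socket /var/tmp/spdk.sock):

    rpc=./scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1            # tiny bdev_io pool/cache so bdevperf exercises the io_wait path
    $rpc framework_start_init                  # finish startup; the target was launched with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0  # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420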
-- target/bdev_io_wait.sh@30 -- # READ_PID=84755 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:14.139 19:36:01 -- nvmf/common.sh@520 -- # config=() 00:15:14.139 19:36:01 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:14.139 19:36:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.139 19:36:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.139 { 00:15:14.139 "params": { 00:15:14.139 "name": "Nvme$subsystem", 00:15:14.139 "trtype": "$TEST_TRANSPORT", 00:15:14.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.139 "adrfam": "ipv4", 00:15:14.139 "trsvcid": "$NVMF_PORT", 00:15:14.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.139 "hdgst": ${hdgst:-false}, 00:15:14.139 "ddgst": ${ddgst:-false} 00:15:14.139 }, 00:15:14.139 "method": "bdev_nvme_attach_controller" 00:15:14.139 } 00:15:14.139 EOF 00:15:14.139 )") 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:14.139 19:36:01 -- nvmf/common.sh@520 -- # config=() 00:15:14.139 19:36:01 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.139 19:36:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.139 19:36:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.139 { 00:15:14.139 "params": { 00:15:14.139 "name": "Nvme$subsystem", 00:15:14.139 "trtype": "$TEST_TRANSPORT", 00:15:14.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.139 "adrfam": "ipv4", 00:15:14.139 "trsvcid": "$NVMF_PORT", 00:15:14.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.139 "hdgst": ${hdgst:-false}, 00:15:14.139 "ddgst": ${ddgst:-false} 00:15:14.139 }, 00:15:14.139 "method": "bdev_nvme_attach_controller" 00:15:14.139 } 00:15:14.139 EOF 00:15:14.139 )") 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84757 00:15:14.139 19:36:01 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:14.139 19:36:01 -- nvmf/common.sh@542 -- # cat 00:15:14.398 19:36:01 -- nvmf/common.sh@542 -- # cat 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:14.398 19:36:01 -- nvmf/common.sh@520 -- # config=() 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84761 00:15:14.398 19:36:01 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@35 -- # sync 00:15:14.398 19:36:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.398 19:36:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.398 { 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme$subsystem", 00:15:14.398 "trtype": "$TEST_TRANSPORT", 00:15:14.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "$NVMF_PORT", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.398 "hdgst": ${hdgst:-false}, 00:15:14.398 "ddgst": ${ddgst:-false} 00:15:14.398 }, 00:15:14.398 "method": 
"bdev_nvme_attach_controller" 00:15:14.398 } 00:15:14.398 EOF 00:15:14.398 )") 00:15:14.398 19:36:01 -- nvmf/common.sh@544 -- # jq . 00:15:14.398 19:36:01 -- nvmf/common.sh@542 -- # cat 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:14.398 19:36:01 -- nvmf/common.sh@520 -- # config=() 00:15:14.398 19:36:01 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.398 19:36:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.398 19:36:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.398 { 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme$subsystem", 00:15:14.398 "trtype": "$TEST_TRANSPORT", 00:15:14.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "$NVMF_PORT", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.398 "hdgst": ${hdgst:-false}, 00:15:14.398 "ddgst": ${ddgst:-false} 00:15:14.398 }, 00:15:14.398 "method": "bdev_nvme_attach_controller" 00:15:14.398 } 00:15:14.398 EOF 00:15:14.398 )") 00:15:14.398 19:36:01 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.398 19:36:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme1", 00:15:14.398 "trtype": "tcp", 00:15:14.398 "traddr": "10.0.0.2", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "4420", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.398 "hdgst": false, 00:15:14.398 "ddgst": false 00:15:14.398 }, 00:15:14.398 "method": "bdev_nvme_attach_controller" 00:15:14.398 }' 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:14.398 19:36:01 -- nvmf/common.sh@542 -- # cat 00:15:14.398 19:36:01 -- nvmf/common.sh@544 -- # jq . 00:15:14.398 19:36:01 -- nvmf/common.sh@544 -- # jq . 00:15:14.398 19:36:01 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.398 19:36:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme1", 00:15:14.398 "trtype": "tcp", 00:15:14.398 "traddr": "10.0.0.2", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "4420", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.398 "hdgst": false, 00:15:14.398 "ddgst": false 00:15:14.398 }, 00:15:14.398 "method": "bdev_nvme_attach_controller" 00:15:14.398 }' 00:15:14.398 19:36:01 -- nvmf/common.sh@544 -- # jq . 
00:15:14.398 19:36:01 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.398 19:36:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme1", 00:15:14.398 "trtype": "tcp", 00:15:14.398 "traddr": "10.0.0.2", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "4420", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.398 "hdgst": false, 00:15:14.398 "ddgst": false 00:15:14.398 }, 00:15:14.398 "method": "bdev_nvme_attach_controller" 00:15:14.398 }' 00:15:14.398 19:36:01 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.398 19:36:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.398 "params": { 00:15:14.398 "name": "Nvme1", 00:15:14.398 "trtype": "tcp", 00:15:14.398 "traddr": "10.0.0.2", 00:15:14.398 "adrfam": "ipv4", 00:15:14.398 "trsvcid": "4420", 00:15:14.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.398 "hdgst": false, 00:15:14.398 "ddgst": false 00:15:14.398 }, 00:15:14.398 "method": "bdev_nvme_attach_controller" 00:15:14.398 }' 00:15:14.398 [2024-12-15 19:36:01.090554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:14.398 [2024-12-15 19:36:01.091440] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:14.398 [2024-12-15 19:36:01.099432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:14.398 [2024-12-15 19:36:01.099557] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:14.398 [2024-12-15 19:36:01.111325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:14.398 [2024-12-15 19:36:01.111404] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:14.398 [2024-12-15 19:36:01.123054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:14.398 [2024-12-15 19:36:01.123147] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:14.398 19:36:01 -- target/bdev_io_wait.sh@37 -- # wait 84753 00:15:14.657 [2024-12-15 19:36:01.351683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.657 [2024-12-15 19:36:01.425149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.657 [2024-12-15 19:36:01.448533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:14.657 [2024-12-15 19:36:01.520122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:14.657 [2024-12-15 19:36:01.525963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.916 [2024-12-15 19:36:01.620833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:14.916 [2024-12-15 19:36:01.627763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.916 [2024-12-15 19:36:01.690208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:14.916 Running I/O for 1 seconds... 00:15:14.916 Running I/O for 1 seconds... 00:15:14.916 Running I/O for 1 seconds... 00:15:15.174 Running I/O for 1 seconds... 00:15:16.110 00:15:16.110 Latency(us) 00:15:16.110 [2024-12-15T19:36:03.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.110 [2024-12-15T19:36:03.006Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:16.110 Nvme1n1 : 1.01 11593.06 45.29 0.00 0.00 11006.05 5213.09 28955.00 00:15:16.110 [2024-12-15T19:36:03.007Z] =================================================================================================================== 00:15:16.111 [2024-12-15T19:36:03.007Z] Total : 11593.06 45.29 0.00 0.00 11006.05 5213.09 28955.00 00:15:16.111 00:15:16.111 Latency(us) 00:15:16.111 [2024-12-15T19:36:03.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.111 [2024-12-15T19:36:03.007Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:16.111 Nvme1n1 : 1.01 8230.88 32.15 0.00 0.00 15472.65 9472.93 25737.77 00:15:16.111 [2024-12-15T19:36:03.007Z] =================================================================================================================== 00:15:16.111 [2024-12-15T19:36:03.007Z] Total : 8230.88 32.15 0.00 0.00 15472.65 9472.93 25737.77 00:15:16.111 00:15:16.111 Latency(us) 00:15:16.111 [2024-12-15T19:36:03.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.111 [2024-12-15T19:36:03.007Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:16.111 Nvme1n1 : 1.00 203258.41 793.98 0.00 0.00 627.37 255.07 733.56 00:15:16.111 [2024-12-15T19:36:03.007Z] =================================================================================================================== 00:15:16.111 [2024-12-15T19:36:03.007Z] Total : 203258.41 793.98 0.00 0.00 627.37 255.07 733.56 00:15:16.111 00:15:16.111 Latency(us) 00:15:16.111 [2024-12-15T19:36:03.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.111 [2024-12-15T19:36:03.007Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:16.111 Nvme1n1 : 1.01 8534.96 33.34 0.00 0.00 14935.54 4110.89 23473.80 00:15:16.111 [2024-12-15T19:36:03.007Z] 
=================================================================================================================== 00:15:16.111 [2024-12-15T19:36:03.007Z] Total : 8534.96 33.34 0.00 0.00 14935.54 4110.89 23473.80 00:15:16.370 19:36:03 -- target/bdev_io_wait.sh@38 -- # wait 84755 00:15:16.628 19:36:03 -- target/bdev_io_wait.sh@39 -- # wait 84757 00:15:16.628 19:36:03 -- target/bdev_io_wait.sh@40 -- # wait 84761 00:15:16.628 19:36:03 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.628 19:36:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.628 19:36:03 -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 19:36:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.628 19:36:03 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:16.628 19:36:03 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:16.628 19:36:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.628 19:36:03 -- nvmf/common.sh@116 -- # sync 00:15:16.628 19:36:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.628 19:36:03 -- nvmf/common.sh@119 -- # set +e 00:15:16.628 19:36:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.628 19:36:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.628 rmmod nvme_tcp 00:15:16.628 rmmod nvme_fabrics 00:15:16.628 rmmod nvme_keyring 00:15:16.628 19:36:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.628 19:36:03 -- nvmf/common.sh@123 -- # set -e 00:15:16.628 19:36:03 -- nvmf/common.sh@124 -- # return 0 00:15:16.628 19:36:03 -- nvmf/common.sh@477 -- # '[' -n 84694 ']' 00:15:16.628 19:36:03 -- nvmf/common.sh@478 -- # killprocess 84694 00:15:16.628 19:36:03 -- common/autotest_common.sh@936 -- # '[' -z 84694 ']' 00:15:16.628 19:36:03 -- common/autotest_common.sh@940 -- # kill -0 84694 00:15:16.628 19:36:03 -- common/autotest_common.sh@941 -- # uname 00:15:16.628 19:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.628 19:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84694 00:15:16.628 killing process with pid 84694 00:15:16.628 19:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.628 19:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.628 19:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84694' 00:15:16.628 19:36:03 -- common/autotest_common.sh@955 -- # kill 84694 00:15:16.628 19:36:03 -- common/autotest_common.sh@960 -- # wait 84694 00:15:16.887 19:36:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:16.887 19:36:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:16.887 19:36:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:16.887 19:36:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.887 19:36:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:16.887 19:36:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.887 19:36:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.887 19:36:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.887 19:36:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:16.887 00:15:16.887 real 0m4.503s 00:15:16.887 user 0m19.367s 00:15:16.887 sys 0m2.408s 00:15:16.887 19:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:16.887 19:36:03 -- common/autotest_common.sh@10 -- # set +x 00:15:16.887 ************************************ 00:15:16.887 END TEST nvmf_bdev_io_wait 
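nvmftestfini above undoes all of this; outside the harness the teardown is roughly:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"     # nvmf_tgt, pid 84694 in this run
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if
    ip netns delete nvmf_tgt_ns_spdk       # assumption: what the _remove_spdk_ns helper amounts to here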
00:15:16.887 ************************************ 00:15:16.887 19:36:03 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:16.887 19:36:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.887 19:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.887 19:36:03 -- common/autotest_common.sh@10 -- # set +x 00:15:16.887 ************************************ 00:15:16.887 START TEST nvmf_queue_depth 00:15:16.887 ************************************ 00:15:16.887 19:36:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:17.146 * Looking for test storage... 00:15:17.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:17.146 19:36:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:17.146 19:36:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:17.146 19:36:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:17.146 19:36:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:17.146 19:36:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:17.146 19:36:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:17.146 19:36:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:17.146 19:36:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:17.146 19:36:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:17.146 19:36:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.146 19:36:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:17.146 19:36:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:17.146 19:36:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:17.146 19:36:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:17.146 19:36:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:17.146 19:36:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:17.146 19:36:03 -- scripts/common.sh@344 -- # : 1 00:15:17.146 19:36:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:17.146 19:36:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.146 19:36:03 -- scripts/common.sh@364 -- # decimal 1 00:15:17.146 19:36:03 -- scripts/common.sh@352 -- # local d=1 00:15:17.146 19:36:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.146 19:36:03 -- scripts/common.sh@354 -- # echo 1 00:15:17.146 19:36:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:17.146 19:36:03 -- scripts/common.sh@365 -- # decimal 2 00:15:17.146 19:36:03 -- scripts/common.sh@352 -- # local d=2 00:15:17.146 19:36:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.146 19:36:03 -- scripts/common.sh@354 -- # echo 2 00:15:17.146 19:36:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:17.146 19:36:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:17.146 19:36:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:17.146 19:36:03 -- scripts/common.sh@367 -- # return 0 00:15:17.146 19:36:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.146 19:36:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:17.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.146 --rc genhtml_branch_coverage=1 00:15:17.146 --rc genhtml_function_coverage=1 00:15:17.146 --rc genhtml_legend=1 00:15:17.146 --rc geninfo_all_blocks=1 00:15:17.146 --rc geninfo_unexecuted_blocks=1 00:15:17.146 00:15:17.146 ' 00:15:17.146 19:36:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:17.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.146 --rc genhtml_branch_coverage=1 00:15:17.146 --rc genhtml_function_coverage=1 00:15:17.146 --rc genhtml_legend=1 00:15:17.146 --rc geninfo_all_blocks=1 00:15:17.146 --rc geninfo_unexecuted_blocks=1 00:15:17.146 00:15:17.146 ' 00:15:17.146 19:36:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:17.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.146 --rc genhtml_branch_coverage=1 00:15:17.146 --rc genhtml_function_coverage=1 00:15:17.146 --rc genhtml_legend=1 00:15:17.146 --rc geninfo_all_blocks=1 00:15:17.146 --rc geninfo_unexecuted_blocks=1 00:15:17.146 00:15:17.146 ' 00:15:17.146 19:36:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:17.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.146 --rc genhtml_branch_coverage=1 00:15:17.146 --rc genhtml_function_coverage=1 00:15:17.146 --rc genhtml_legend=1 00:15:17.146 --rc geninfo_all_blocks=1 00:15:17.146 --rc geninfo_unexecuted_blocks=1 00:15:17.146 00:15:17.146 ' 00:15:17.146 19:36:03 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.146 19:36:03 -- nvmf/common.sh@7 -- # uname -s 00:15:17.146 19:36:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.146 19:36:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.146 19:36:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.146 19:36:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.146 19:36:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.146 19:36:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.146 19:36:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.146 19:36:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.146 19:36:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.146 19:36:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.146 19:36:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:15:17.146 19:36:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:17.146 19:36:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.146 19:36:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.146 19:36:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.146 19:36:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.146 19:36:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.146 19:36:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.147 19:36:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.147 19:36:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.147 19:36:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.147 19:36:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.147 19:36:03 -- paths/export.sh@5 -- # export PATH 00:15:17.147 19:36:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.147 19:36:03 -- nvmf/common.sh@46 -- # : 0 00:15:17.147 19:36:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.147 19:36:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.147 19:36:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.147 19:36:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.147 19:36:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.147 19:36:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:17.147 19:36:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.147 19:36:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.147 19:36:03 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:17.147 19:36:03 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:17.147 19:36:03 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.147 19:36:03 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:17.147 19:36:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:17.147 19:36:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.147 19:36:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:17.147 19:36:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:17.147 19:36:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:17.147 19:36:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.147 19:36:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.147 19:36:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.147 19:36:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:17.147 19:36:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:17.147 19:36:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:17.147 19:36:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:17.147 19:36:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:17.147 19:36:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:17.147 19:36:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.147 19:36:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.147 19:36:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.147 19:36:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:17.147 19:36:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.147 19:36:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.147 19:36:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.147 19:36:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.147 19:36:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.147 19:36:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.147 19:36:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.147 19:36:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.147 19:36:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:17.147 19:36:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:17.147 Cannot find device "nvmf_tgt_br" 00:15:17.147 19:36:03 -- nvmf/common.sh@154 -- # true 00:15:17.147 19:36:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.147 Cannot find device "nvmf_tgt_br2" 00:15:17.147 19:36:03 -- nvmf/common.sh@155 -- # true 00:15:17.147 19:36:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:17.147 19:36:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:17.147 Cannot find device "nvmf_tgt_br" 00:15:17.147 19:36:03 -- nvmf/common.sh@157 -- # true 00:15:17.147 19:36:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:17.147 Cannot find device "nvmf_tgt_br2" 00:15:17.147 19:36:03 -- nvmf/common.sh@158 -- # true 00:15:17.147 19:36:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:17.147 19:36:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:17.147 19:36:04 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.406 19:36:04 -- nvmf/common.sh@161 -- # true 00:15:17.406 19:36:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.406 19:36:04 -- nvmf/common.sh@162 -- # true 00:15:17.406 19:36:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.406 19:36:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.406 19:36:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.406 19:36:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.406 19:36:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.406 19:36:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.406 19:36:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.406 19:36:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.406 19:36:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.406 19:36:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:17.406 19:36:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:17.406 19:36:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:17.406 19:36:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:17.406 19:36:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.406 19:36:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.406 19:36:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.406 19:36:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:17.406 19:36:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:17.406 19:36:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.406 19:36:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.406 19:36:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.406 19:36:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.406 19:36:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.406 19:36:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:17.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:17.406 00:15:17.406 --- 10.0.0.2 ping statistics --- 00:15:17.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.406 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:17.406 19:36:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:17.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:17.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:17.406 00:15:17.406 --- 10.0.0.3 ping statistics --- 00:15:17.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.406 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:17.406 19:36:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:17.406 00:15:17.406 --- 10.0.0.1 ping statistics --- 00:15:17.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.406 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:17.406 19:36:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.406 19:36:04 -- nvmf/common.sh@421 -- # return 0 00:15:17.406 19:36:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.406 19:36:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.406 19:36:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.406 19:36:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.406 19:36:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.406 19:36:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.406 19:36:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.406 19:36:04 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:17.406 19:36:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.406 19:36:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.406 19:36:04 -- common/autotest_common.sh@10 -- # set +x 00:15:17.406 19:36:04 -- nvmf/common.sh@469 -- # nvmfpid=84999 00:15:17.406 19:36:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.406 19:36:04 -- nvmf/common.sh@470 -- # waitforlisten 84999 00:15:17.406 19:36:04 -- common/autotest_common.sh@829 -- # '[' -z 84999 ']' 00:15:17.406 19:36:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.406 19:36:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.406 19:36:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.406 19:36:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.406 19:36:04 -- common/autotest_common.sh@10 -- # set +x 00:15:17.406 [2024-12-15 19:36:04.294188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:17.406 [2024-12-15 19:36:04.294283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.665 [2024-12-15 19:36:04.426248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.665 [2024-12-15 19:36:04.496707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.665 [2024-12-15 19:36:04.496888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.665 [2024-12-15 19:36:04.496901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:17.665 [2024-12-15 19:36:04.496910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.665 [2024-12-15 19:36:04.496936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.604 19:36:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.604 19:36:05 -- common/autotest_common.sh@862 -- # return 0 00:15:18.604 19:36:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.604 19:36:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 19:36:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.604 19:36:05 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.604 19:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 [2024-12-15 19:36:05.383495] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.604 19:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.604 19:36:05 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.604 19:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 Malloc0 00:15:18.604 19:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.604 19:36:05 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.604 19:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 19:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.604 19:36:05 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.604 19:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 19:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.604 19:36:05 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.604 19:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.604 [2024-12-15 19:36:05.450112] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.604 19:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.604 19:36:05 -- target/queue_depth.sh@30 -- # bdevperf_pid=85049 00:15:18.604 19:36:05 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:18.604 19:36:05 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.604 19:36:05 -- target/queue_depth.sh@33 -- # waitforlisten 85049 /var/tmp/bdevperf.sock 00:15:18.604 19:36:05 -- common/autotest_common.sh@829 -- # '[' -z 85049 ']' 00:15:18.604 19:36:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.604 19:36:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
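queue_depth.sh drives bdevperf in server mode (-z) on its own RPC socket, then attaches the target and kicks off the run in the trace lines that follow; condensed:

    # The target is already listening on 10.0.0.2:4420 (RPCs above).
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # Once /var/tmp/bdevperf.sock is up, hand bdevperf its NVMe-oF bdev and start the
    # 10-second verify run at queue depth 1024 against the single subsystem.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests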
00:15:18.604 19:36:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.604 19:36:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.604 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:18.862 [2024-12-15 19:36:05.510622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:18.862 [2024-12-15 19:36:05.510738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85049 ] 00:15:18.862 [2024-12-15 19:36:05.648798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.862 [2024-12-15 19:36:05.733237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.795 19:36:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.795 19:36:06 -- common/autotest_common.sh@862 -- # return 0 00:15:19.795 19:36:06 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:19.795 19:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.795 19:36:06 -- common/autotest_common.sh@10 -- # set +x 00:15:19.795 NVMe0n1 00:15:19.795 19:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.795 19:36:06 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:19.795 Running I/O for 10 seconds... 00:15:31.997 00:15:31.997 Latency(us) 00:15:31.997 [2024-12-15T19:36:18.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.997 [2024-12-15T19:36:18.893Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:31.997 Verification LBA range: start 0x0 length 0x4000 00:15:31.997 NVMe0n1 : 10.05 17199.20 67.18 0.00 0.00 59358.05 11319.85 50760.61 00:15:31.997 [2024-12-15T19:36:18.893Z] =================================================================================================================== 00:15:31.997 [2024-12-15T19:36:18.893Z] Total : 17199.20 67.18 0.00 0.00 59358.05 11319.85 50760.61 00:15:31.997 0 00:15:31.997 19:36:16 -- target/queue_depth.sh@39 -- # killprocess 85049 00:15:31.997 19:36:16 -- common/autotest_common.sh@936 -- # '[' -z 85049 ']' 00:15:31.997 19:36:16 -- common/autotest_common.sh@940 -- # kill -0 85049 00:15:31.997 19:36:16 -- common/autotest_common.sh@941 -- # uname 00:15:31.997 19:36:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.997 19:36:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85049 00:15:31.997 19:36:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:31.997 19:36:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:31.997 killing process with pid 85049 00:15:31.997 19:36:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85049' 00:15:31.997 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.997 00:15:31.997 Latency(us) 00:15:31.997 [2024-12-15T19:36:18.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.997 [2024-12-15T19:36:18.893Z] =================================================================================================================== 00:15:31.997 
[2024-12-15T19:36:18.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.997 19:36:16 -- common/autotest_common.sh@955 -- # kill 85049 00:15:31.997 19:36:16 -- common/autotest_common.sh@960 -- # wait 85049 00:15:31.997 19:36:17 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:31.997 19:36:17 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:31.997 19:36:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:31.997 19:36:17 -- nvmf/common.sh@116 -- # sync 00:15:31.997 19:36:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:31.997 19:36:17 -- nvmf/common.sh@119 -- # set +e 00:15:31.997 19:36:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:31.997 19:36:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:31.997 rmmod nvme_tcp 00:15:31.997 rmmod nvme_fabrics 00:15:31.997 rmmod nvme_keyring 00:15:31.997 19:36:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:31.997 19:36:17 -- nvmf/common.sh@123 -- # set -e 00:15:31.997 19:36:17 -- nvmf/common.sh@124 -- # return 0 00:15:31.997 19:36:17 -- nvmf/common.sh@477 -- # '[' -n 84999 ']' 00:15:31.997 19:36:17 -- nvmf/common.sh@478 -- # killprocess 84999 00:15:31.997 19:36:17 -- common/autotest_common.sh@936 -- # '[' -z 84999 ']' 00:15:31.997 19:36:17 -- common/autotest_common.sh@940 -- # kill -0 84999 00:15:31.997 19:36:17 -- common/autotest_common.sh@941 -- # uname 00:15:31.997 19:36:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.997 19:36:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84999 00:15:31.997 19:36:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:31.997 killing process with pid 84999 00:15:31.997 19:36:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:31.997 19:36:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84999' 00:15:31.997 19:36:17 -- common/autotest_common.sh@955 -- # kill 84999 00:15:31.997 19:36:17 -- common/autotest_common.sh@960 -- # wait 84999 00:15:31.997 19:36:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:31.997 19:36:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:31.997 19:36:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:31.997 19:36:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.997 19:36:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:31.997 19:36:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.997 19:36:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.997 19:36:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.997 19:36:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:31.997 00:15:31.997 real 0m13.768s 00:15:31.997 user 0m23.364s 00:15:31.997 sys 0m2.267s 00:15:31.997 19:36:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:31.997 ************************************ 00:15:31.997 END TEST nvmf_queue_depth 00:15:31.997 ************************************ 00:15:31.997 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 19:36:17 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.997 19:36:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:31.997 19:36:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:31.997 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 ************************************ 00:15:31.997 START TEST nvmf_multipath 00:15:31.997 
************************************ 00:15:31.997 19:36:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.997 * Looking for test storage... 00:15:31.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.997 19:36:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:31.997 19:36:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:31.997 19:36:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:31.997 19:36:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:31.997 19:36:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:31.997 19:36:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:31.997 19:36:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:31.997 19:36:17 -- scripts/common.sh@335 -- # IFS=.-: 00:15:31.997 19:36:17 -- scripts/common.sh@335 -- # read -ra ver1 00:15:31.997 19:36:17 -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.997 19:36:17 -- scripts/common.sh@336 -- # read -ra ver2 00:15:31.997 19:36:17 -- scripts/common.sh@337 -- # local 'op=<' 00:15:31.997 19:36:17 -- scripts/common.sh@339 -- # ver1_l=2 00:15:31.997 19:36:17 -- scripts/common.sh@340 -- # ver2_l=1 00:15:31.997 19:36:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:31.997 19:36:17 -- scripts/common.sh@343 -- # case "$op" in 00:15:31.998 19:36:17 -- scripts/common.sh@344 -- # : 1 00:15:31.998 19:36:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:31.998 19:36:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.998 19:36:17 -- scripts/common.sh@364 -- # decimal 1 00:15:31.998 19:36:17 -- scripts/common.sh@352 -- # local d=1 00:15:31.998 19:36:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.998 19:36:17 -- scripts/common.sh@354 -- # echo 1 00:15:31.998 19:36:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:31.998 19:36:17 -- scripts/common.sh@365 -- # decimal 2 00:15:31.998 19:36:17 -- scripts/common.sh@352 -- # local d=2 00:15:31.998 19:36:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.998 19:36:17 -- scripts/common.sh@354 -- # echo 2 00:15:31.998 19:36:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:31.998 19:36:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:31.998 19:36:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:31.998 19:36:17 -- scripts/common.sh@367 -- # return 0 00:15:31.998 19:36:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.998 19:36:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:31.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.998 --rc genhtml_branch_coverage=1 00:15:31.998 --rc genhtml_function_coverage=1 00:15:31.998 --rc genhtml_legend=1 00:15:31.998 --rc geninfo_all_blocks=1 00:15:31.998 --rc geninfo_unexecuted_blocks=1 00:15:31.998 00:15:31.998 ' 00:15:31.998 19:36:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:31.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.998 --rc genhtml_branch_coverage=1 00:15:31.998 --rc genhtml_function_coverage=1 00:15:31.998 --rc genhtml_legend=1 00:15:31.998 --rc geninfo_all_blocks=1 00:15:31.998 --rc geninfo_unexecuted_blocks=1 00:15:31.998 00:15:31.998 ' 00:15:31.998 19:36:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:31.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.998 --rc 
genhtml_branch_coverage=1 00:15:31.998 --rc genhtml_function_coverage=1 00:15:31.998 --rc genhtml_legend=1 00:15:31.998 --rc geninfo_all_blocks=1 00:15:31.998 --rc geninfo_unexecuted_blocks=1 00:15:31.998 00:15:31.998 ' 00:15:31.998 19:36:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:31.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.998 --rc genhtml_branch_coverage=1 00:15:31.998 --rc genhtml_function_coverage=1 00:15:31.998 --rc genhtml_legend=1 00:15:31.998 --rc geninfo_all_blocks=1 00:15:31.998 --rc geninfo_unexecuted_blocks=1 00:15:31.998 00:15:31.998 ' 00:15:31.998 19:36:17 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.998 19:36:17 -- nvmf/common.sh@7 -- # uname -s 00:15:31.998 19:36:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.998 19:36:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.998 19:36:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.998 19:36:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.998 19:36:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.998 19:36:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.998 19:36:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.998 19:36:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.998 19:36:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.998 19:36:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:31.998 19:36:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:31.998 19:36:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.998 19:36:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.998 19:36:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.998 19:36:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.998 19:36:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.998 19:36:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.998 19:36:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.998 19:36:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.998 19:36:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.998 19:36:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.998 19:36:17 -- paths/export.sh@5 -- # export PATH 00:15:31.998 19:36:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.998 19:36:17 -- nvmf/common.sh@46 -- # : 0 00:15:31.998 19:36:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:31.998 19:36:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:31.998 19:36:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:31.998 19:36:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.998 19:36:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.998 19:36:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:31.998 19:36:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:31.998 19:36:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:31.998 19:36:17 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.998 19:36:17 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.998 19:36:17 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:31.998 19:36:17 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.998 19:36:17 -- target/multipath.sh@43 -- # nvmftestinit 00:15:31.998 19:36:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:31.998 19:36:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.998 19:36:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:31.998 19:36:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:31.998 19:36:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:31.998 19:36:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.998 19:36:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.998 19:36:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.998 19:36:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:31.998 19:36:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:31.998 19:36:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.998 19:36:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.998 19:36:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.998 19:36:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:31.998 19:36:17 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.998 19:36:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.998 19:36:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.998 19:36:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.998 19:36:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.998 19:36:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.998 19:36:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.998 19:36:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.998 19:36:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:31.998 19:36:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:31.998 Cannot find device "nvmf_tgt_br" 00:15:31.998 19:36:17 -- nvmf/common.sh@154 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.998 Cannot find device "nvmf_tgt_br2" 00:15:31.998 19:36:17 -- nvmf/common.sh@155 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:31.998 19:36:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:31.998 Cannot find device "nvmf_tgt_br" 00:15:31.998 19:36:17 -- nvmf/common.sh@157 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:31.998 Cannot find device "nvmf_tgt_br2" 00:15:31.998 19:36:17 -- nvmf/common.sh@158 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:31.998 19:36:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:31.998 19:36:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.998 19:36:17 -- nvmf/common.sh@161 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.998 19:36:17 -- nvmf/common.sh@162 -- # true 00:15:31.998 19:36:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.998 19:36:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.998 19:36:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.998 19:36:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.998 19:36:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.998 19:36:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.998 19:36:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.998 19:36:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.999 19:36:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.999 19:36:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:31.999 19:36:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:31.999 19:36:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:31.999 19:36:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:31.999 19:36:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:31.999 19:36:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.999 19:36:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.999 19:36:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:31.999 19:36:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:31.999 19:36:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.999 19:36:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.999 19:36:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.999 19:36:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.999 19:36:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.999 19:36:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:31.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:31.999 00:15:31.999 --- 10.0.0.2 ping statistics --- 00:15:31.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.999 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:31.999 19:36:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:31.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:31.999 00:15:31.999 --- 10.0.0.3 ping statistics --- 00:15:31.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.999 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:31.999 19:36:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:31.999 00:15:31.999 --- 10.0.0.1 ping statistics --- 00:15:31.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.999 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:31.999 19:36:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.999 19:36:18 -- nvmf/common.sh@421 -- # return 0 00:15:31.999 19:36:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.999 19:36:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.999 19:36:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.999 19:36:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.999 19:36:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.999 19:36:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.999 19:36:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.999 19:36:18 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:31.999 19:36:18 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:31.999 19:36:18 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:31.999 19:36:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.999 19:36:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.999 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:15:31.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
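The ping checks above are the last step of nvmf_veth_init: the initiator address and both target addresses answer, so nvmf_tgt can be started inside the nvmf_tgt_ns_spdk namespace next. Condensed from the trace (interface and namespace names exactly as the test scripts use them; root required), the topology amounts to:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target path, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                               # bridge tying the three veth peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# (every interface and the bridge are also set "up", as in the trace)
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> both target paths
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator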
00:15:31.999 19:36:18 -- nvmf/common.sh@469 -- # nvmfpid=85390 00:15:31.999 19:36:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.999 19:36:18 -- nvmf/common.sh@470 -- # waitforlisten 85390 00:15:31.999 19:36:18 -- common/autotest_common.sh@829 -- # '[' -z 85390 ']' 00:15:31.999 19:36:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.999 19:36:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.999 19:36:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.999 19:36:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.999 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:15:31.999 [2024-12-15 19:36:18.170619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:31.999 [2024-12-15 19:36:18.170920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.999 [2024-12-15 19:36:18.304307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.999 [2024-12-15 19:36:18.385267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.999 [2024-12-15 19:36:18.385771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.999 [2024-12-15 19:36:18.385921] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.999 [2024-12-15 19:36:18.386063] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
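The app_setup_trace notices above refer to this nvmf_tgt instance (shm id 0, tracepoint group mask 0xFFFF). A minimal sketch of acting on them while the target is still running, using the command the notice itself suggests, would be:
spdk_trace -s nvmf -i 0                      # snapshot the live tracepoints; the tool is typically built as build/bin/spdk_trace in the SPDK tree
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep the shm trace file for offline analysis/debug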
00:15:31.999 [2024-12-15 19:36:18.386171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.999 [2024-12-15 19:36:18.386311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.999 [2024-12-15 19:36:18.387088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.999 [2024-12-15 19:36:18.387093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.258 19:36:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.258 19:36:19 -- common/autotest_common.sh@862 -- # return 0 00:15:32.258 19:36:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.258 19:36:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.258 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:15:32.516 19:36:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.516 19:36:19 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.516 [2024-12-15 19:36:19.387636] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.775 19:36:19 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:33.033 Malloc0 00:15:33.033 19:36:19 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:33.292 19:36:20 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.551 19:36:20 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.809 [2024-12-15 19:36:20.480537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.809 19:36:20 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:33.809 [2024-12-15 19:36:20.700658] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.068 19:36:20 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:34.068 19:36:20 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:34.326 19:36:21 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.326 19:36:21 -- common/autotest_common.sh@1187 -- # local i=0 00:15:34.326 19:36:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.326 19:36:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:34.326 19:36:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:36.860 19:36:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:36.861 19:36:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:36.861 19:36:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.861 19:36:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:36.861 19:36:23 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.861 19:36:23 -- common/autotest_common.sh@1197 -- # return 0 00:15:36.861 19:36:23 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:36.861 19:36:23 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:36.861 19:36:23 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:36.861 19:36:23 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:36.861 19:36:23 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:36.861 19:36:23 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:36.861 19:36:23 -- target/multipath.sh@38 -- # return 0 00:15:36.861 19:36:23 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:36.861 19:36:23 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:36.861 19:36:23 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:36.861 19:36:23 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:36.861 19:36:23 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:36.861 19:36:23 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:36.861 19:36:23 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:36.861 19:36:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:36.861 19:36:23 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.861 19:36:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:36.861 19:36:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:36.861 19:36:23 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:36.861 19:36:23 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:36.861 19:36:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:36.861 19:36:23 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.861 19:36:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:36.861 19:36:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.861 19:36:23 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:36.861 19:36:23 -- target/multipath.sh@85 -- # echo numa 00:15:36.861 19:36:23 -- target/multipath.sh@88 -- # fio_pid=85529 00:15:36.861 19:36:23 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:36.861 19:36:23 -- target/multipath.sh@90 -- # sleep 1 00:15:36.861 [global] 00:15:36.861 thread=1 00:15:36.861 invalidate=1 00:15:36.861 rw=randrw 00:15:36.861 time_based=1 00:15:36.861 runtime=6 00:15:36.861 ioengine=libaio 00:15:36.861 direct=1 00:15:36.861 bs=4096 00:15:36.861 iodepth=128 00:15:36.861 norandommap=0 00:15:36.861 numjobs=1 00:15:36.861 00:15:36.861 verify_dump=1 00:15:36.861 verify_backlog=512 00:15:36.861 verify_state_save=0 00:15:36.861 do_verify=1 00:15:36.861 verify=crc32c-intel 00:15:36.861 [job0] 00:15:36.861 filename=/dev/nvme0n1 00:15:36.861 Could not set queue depth (nvme0n1) 00:15:36.861 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:36.861 fio-3.35 00:15:36.861 Starting 1 thread 00:15:37.428 19:36:24 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:37.687 19:36:24 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:37.945 19:36:24 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:37.945 19:36:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:37.945 19:36:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.945 19:36:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:37.945 19:36:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:37.945 19:36:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:37.945 19:36:24 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:37.945 19:36:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:37.945 19:36:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.945 19:36:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:37.945 19:36:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.945 19:36:24 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:37.945 19:36:24 -- target/multipath.sh@25 -- # sleep 1s 00:15:38.881 19:36:25 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:38.881 19:36:25 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.881 19:36:25 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:38.881 19:36:25 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:39.139 19:36:25 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:39.397 19:36:26 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:39.397 19:36:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:39.397 19:36:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.397 19:36:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:39.398 19:36:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:39.398 19:36:26 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:39.398 19:36:26 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:39.398 19:36:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:39.398 19:36:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.398 19:36:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:39.398 19:36:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.398 19:36:26 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:39.398 19:36:26 -- target/multipath.sh@25 -- # sleep 1s 00:15:40.774 19:36:27 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:40.774 19:36:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.774 19:36:27 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:40.774 19:36:27 -- target/multipath.sh@104 -- # wait 85529 00:15:42.679 00:15:42.679 job0: (groupid=0, jobs=1): err= 0: pid=85556: Sun Dec 15 19:36:29 2024 00:15:42.679 read: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(305MiB/6006msec) 00:15:42.679 slat (usec): min=3, max=6500, avg=45.47, stdev=203.31 00:15:42.679 clat (usec): min=1240, max=15258, avg=6798.62, stdev=1027.49 00:15:42.679 lat (usec): min=1261, max=15268, avg=6844.09, stdev=1036.46 00:15:42.679 clat percentiles (usec): 00:15:42.679 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6063], 00:15:42.679 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6980], 00:15:42.679 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8586], 00:15:42.679 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11338], 99.95th=[12256], 00:15:42.679 | 99.99th=[13435] 00:15:42.679 bw ( KiB/s): min=13112, max=33224, per=52.61%, avg=27327.27, stdev=7183.95, samples=11 00:15:42.679 iops : min= 3278, max= 8306, avg=6831.82, stdev=1795.99, samples=11 00:15:42.679 write: IOPS=7691, BW=30.0MiB/s (31.5MB/s)(153MiB/5083msec); 0 zone resets 00:15:42.679 slat (usec): min=5, max=2157, avg=54.84, stdev=146.27 00:15:42.679 clat (usec): min=1813, max=11473, avg=5920.51, stdev=823.17 00:15:42.679 lat (usec): min=1840, max=12601, avg=5975.35, stdev=826.71 00:15:42.679 clat percentiles (usec): 00:15:42.679 | 1.00th=[ 3490], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5407], 00:15:42.679 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6128], 00:15:42.679 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 7046], 00:15:42.679 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10552], 99.95th=[11076], 00:15:42.679 | 99.99th=[11338] 00:15:42.679 bw ( KiB/s): min=13352, max=32520, per=88.78%, avg=27312.73, stdev=6981.23, samples=11 00:15:42.679 iops : min= 3338, max= 8130, avg=6828.18, stdev=1745.31, samples=11 00:15:42.679 lat (msec) : 2=0.03%, 4=1.25%, 10=98.00%, 20=0.72% 00:15:42.679 cpu : usr=5.66%, sys=20.33%, ctx=6992, majf=0, minf=127 00:15:42.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:42.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.679 issued rwts: total=77995,39094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.679 00:15:42.679 Run status group 0 (all jobs): 00:15:42.679 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=305MiB (319MB), run=6006-6006msec 00:15:42.679 WRITE: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=153MiB (160MB), run=5083-5083msec 00:15:42.679 00:15:42.679 Disk stats (read/write): 00:15:42.679 nvme0n1: ios=76585/38917, merge=0/0, ticks=489339/216475, in_queue=705814, util=98.65% 00:15:42.679 19:36:29 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:42.937 19:36:29 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:43.504 19:36:30 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:43.504 
19:36:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:43.504 19:36:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:43.504 19:36:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:43.504 19:36:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:43.504 19:36:30 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:43.504 19:36:30 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:43.504 19:36:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:43.504 19:36:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:43.504 19:36:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:43.504 19:36:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.504 19:36:30 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:43.504 19:36:30 -- target/multipath.sh@25 -- # sleep 1s 00:15:44.468 19:36:31 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:44.468 19:36:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:44.468 19:36:31 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:44.468 19:36:31 -- target/multipath.sh@113 -- # echo round-robin 00:15:44.468 19:36:31 -- target/multipath.sh@116 -- # fio_pid=85681 00:15:44.468 19:36:31 -- target/multipath.sh@118 -- # sleep 1 00:15:44.468 19:36:31 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:44.468 [global] 00:15:44.468 thread=1 00:15:44.468 invalidate=1 00:15:44.468 rw=randrw 00:15:44.468 time_based=1 00:15:44.468 runtime=6 00:15:44.468 ioengine=libaio 00:15:44.468 direct=1 00:15:44.468 bs=4096 00:15:44.468 iodepth=128 00:15:44.468 norandommap=0 00:15:44.468 numjobs=1 00:15:44.468 00:15:44.468 verify_dump=1 00:15:44.468 verify_backlog=512 00:15:44.468 verify_state_save=0 00:15:44.468 do_verify=1 00:15:44.468 verify=crc32c-intel 00:15:44.468 [job0] 00:15:44.468 filename=/dev/nvme0n1 00:15:44.468 Could not set queue depth (nvme0n1) 00:15:44.468 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.468 fio-3.35 00:15:44.468 Starting 1 thread 00:15:45.403 19:36:32 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:45.661 19:36:32 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:45.919 19:36:32 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:45.919 19:36:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:45.919 19:36:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.919 19:36:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:45.919 19:36:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:45.919 19:36:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:45.919 19:36:32 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:45.919 19:36:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:45.919 19:36:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.919 19:36:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:45.919 19:36:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:45.919 19:36:32 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:45.919 19:36:32 -- target/multipath.sh@25 -- # sleep 1s 00:15:46.853 19:36:33 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:46.853 19:36:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.853 19:36:33 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.853 19:36:33 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:47.420 19:36:34 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:47.420 19:36:34 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:47.420 19:36:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:47.420 19:36:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:47.420 19:36:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:47.420 19:36:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:47.420 19:36:34 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:47.420 19:36:34 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:47.420 19:36:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:47.420 19:36:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:47.420 19:36:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:47.420 19:36:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:47.420 19:36:34 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:47.420 19:36:34 -- target/multipath.sh@25 -- # sleep 1s 00:15:48.794 19:36:35 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:48.794 19:36:35 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:48.794 19:36:35 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:48.794 19:36:35 -- target/multipath.sh@132 -- # wait 85681 00:15:50.694 00:15:50.694 job0: (groupid=0, jobs=1): err= 0: pid=85702: Sun Dec 15 19:36:37 2024 00:15:50.694 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(333MiB/6004msec) 00:15:50.694 slat (usec): min=2, max=4576, avg=36.58, stdev=174.01 00:15:50.694 clat (usec): min=601, max=12636, avg=6252.27, stdev=1464.40 00:15:50.694 lat (usec): min=776, max=12659, avg=6288.85, stdev=1476.97 00:15:50.694 clat percentiles (usec): 00:15:50.694 | 1.00th=[ 2507], 5.00th=[ 3654], 10.00th=[ 4228], 20.00th=[ 5080], 00:15:50.694 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6652], 00:15:50.694 | 70.00th=[ 6980], 80.00th=[ 7373], 90.00th=[ 7898], 95.00th=[ 8455], 00:15:50.694 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11600], 99.95th=[11994], 00:15:50.694 | 99.99th=[12387] 00:15:50.694 bw ( KiB/s): min=14400, max=47440, per=51.91%, avg=29458.18, stdev=10614.17, samples=11 00:15:50.694 iops : min= 3600, max=11860, avg=7364.55, stdev=2653.54, samples=11 00:15:50.694 write: IOPS=8481, BW=33.1MiB/s (34.7MB/s)(172MiB/5190msec); 0 zone resets 00:15:50.694 slat (usec): min=10, max=4254, avg=47.25, stdev=113.94 00:15:50.694 clat (usec): min=174, max=11250, avg=5208.10, stdev=1409.06 00:15:50.695 lat (usec): min=287, max=11269, avg=5255.35, stdev=1418.51 00:15:50.695 clat percentiles (usec): 00:15:50.695 | 1.00th=[ 2245], 5.00th=[ 2769], 10.00th=[ 3130], 20.00th=[ 3720], 00:15:50.695 | 30.00th=[ 4424], 40.00th=[ 5211], 50.00th=[ 5604], 60.00th=[ 5866], 00:15:50.695 | 70.00th=[ 6063], 80.00th=[ 6325], 90.00th=[ 6652], 95.00th=[ 6980], 00:15:50.695 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[10683], 00:15:50.695 | 99.99th=[11207] 00:15:50.695 bw ( KiB/s): min=15184, max=48176, per=87.06%, avg=29536.73, stdev=10238.78, samples=11 00:15:50.695 iops : min= 3796, max=12044, avg=7384.18, stdev=2559.70, samples=11 00:15:50.695 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:15:50.695 lat (msec) : 2=0.39%, 4=13.12%, 10=85.70%, 20=0.76% 00:15:50.695 cpu : usr=6.21%, sys=27.03%, ctx=8729, majf=0, minf=114 00:15:50.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:50.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:50.695 issued rwts: total=85186,44020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:50.695 00:15:50.695 Run status group 0 (all jobs): 00:15:50.695 READ: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=333MiB (349MB), run=6004-6004msec 00:15:50.695 WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=172MiB (180MB), run=5190-5190msec 00:15:50.695 00:15:50.695 Disk stats (read/write): 00:15:50.695 nvme0n1: ios=84259/43273, merge=0/0, ticks=482733/202463, in_queue=685196, util=98.62% 00:15:50.695 19:36:37 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:50.695 19:36:37 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.695 19:36:37 -- common/autotest_common.sh@1208 -- # local i=0 00:15:50.695 19:36:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:50.695 
19:36:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.695 19:36:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.695 19:36:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:50.695 19:36:37 -- common/autotest_common.sh@1220 -- # return 0 00:15:50.695 19:36:37 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.953 19:36:37 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:50.953 19:36:37 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:50.953 19:36:37 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:50.953 19:36:37 -- target/multipath.sh@144 -- # nvmftestfini 00:15:50.953 19:36:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.953 19:36:37 -- nvmf/common.sh@116 -- # sync 00:15:50.953 19:36:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.953 19:36:37 -- nvmf/common.sh@119 -- # set +e 00:15:50.953 19:36:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.953 19:36:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.953 rmmod nvme_tcp 00:15:50.953 rmmod nvme_fabrics 00:15:50.953 rmmod nvme_keyring 00:15:51.211 19:36:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:51.211 19:36:37 -- nvmf/common.sh@123 -- # set -e 00:15:51.211 19:36:37 -- nvmf/common.sh@124 -- # return 0 00:15:51.211 19:36:37 -- nvmf/common.sh@477 -- # '[' -n 85390 ']' 00:15:51.211 19:36:37 -- nvmf/common.sh@478 -- # killprocess 85390 00:15:51.211 19:36:37 -- common/autotest_common.sh@936 -- # '[' -z 85390 ']' 00:15:51.211 19:36:37 -- common/autotest_common.sh@940 -- # kill -0 85390 00:15:51.211 19:36:37 -- common/autotest_common.sh@941 -- # uname 00:15:51.211 19:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.211 19:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85390 00:15:51.211 killing process with pid 85390 00:15:51.211 19:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:51.211 19:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.211 19:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85390' 00:15:51.211 19:36:37 -- common/autotest_common.sh@955 -- # kill 85390 00:15:51.211 19:36:37 -- common/autotest_common.sh@960 -- # wait 85390 00:15:51.470 19:36:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.470 19:36:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:51.470 19:36:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:51.470 19:36:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.470 19:36:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:51.470 19:36:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.470 19:36:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.470 19:36:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.470 19:36:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:51.470 ************************************ 00:15:51.470 END TEST nvmf_multipath 00:15:51.470 ************************************ 00:15:51.470 00:15:51.470 real 0m20.644s 00:15:51.470 user 1m20.198s 00:15:51.470 sys 0m6.958s 00:15:51.470 19:36:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.470 19:36:38 -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 19:36:38 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.470 19:36:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.470 19:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.470 19:36:38 -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 ************************************ 00:15:51.470 START TEST nvmf_zcopy 00:15:51.470 ************************************ 00:15:51.470 19:36:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.470 * Looking for test storage... 00:15:51.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.470 19:36:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.470 19:36:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.470 19:36:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.729 19:36:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.729 19:36:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.729 19:36:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.729 19:36:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.729 19:36:38 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.729 19:36:38 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.729 19:36:38 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.729 19:36:38 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.729 19:36:38 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.729 19:36:38 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.729 19:36:38 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.729 19:36:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.729 19:36:38 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.729 19:36:38 -- scripts/common.sh@344 -- # : 1 00:15:51.729 19:36:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.729 19:36:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.729 19:36:38 -- scripts/common.sh@364 -- # decimal 1 00:15:51.729 19:36:38 -- scripts/common.sh@352 -- # local d=1 00:15:51.729 19:36:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.729 19:36:38 -- scripts/common.sh@354 -- # echo 1 00:15:51.729 19:36:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.729 19:36:38 -- scripts/common.sh@365 -- # decimal 2 00:15:51.729 19:36:38 -- scripts/common.sh@352 -- # local d=2 00:15:51.729 19:36:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.729 19:36:38 -- scripts/common.sh@354 -- # echo 2 00:15:51.729 19:36:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.729 19:36:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.729 19:36:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.729 19:36:38 -- scripts/common.sh@367 -- # return 0 00:15:51.729 19:36:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.729 19:36:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.729 --rc genhtml_branch_coverage=1 00:15:51.729 --rc genhtml_function_coverage=1 00:15:51.729 --rc genhtml_legend=1 00:15:51.729 --rc geninfo_all_blocks=1 00:15:51.729 --rc geninfo_unexecuted_blocks=1 00:15:51.729 00:15:51.729 ' 00:15:51.729 19:36:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.729 --rc genhtml_branch_coverage=1 00:15:51.729 --rc genhtml_function_coverage=1 00:15:51.729 --rc genhtml_legend=1 00:15:51.729 --rc geninfo_all_blocks=1 00:15:51.729 --rc geninfo_unexecuted_blocks=1 00:15:51.729 00:15:51.729 ' 00:15:51.729 19:36:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.729 --rc genhtml_branch_coverage=1 00:15:51.729 --rc genhtml_function_coverage=1 00:15:51.729 --rc genhtml_legend=1 00:15:51.729 --rc geninfo_all_blocks=1 00:15:51.729 --rc geninfo_unexecuted_blocks=1 00:15:51.729 00:15:51.729 ' 00:15:51.729 19:36:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.729 --rc genhtml_branch_coverage=1 00:15:51.729 --rc genhtml_function_coverage=1 00:15:51.729 --rc genhtml_legend=1 00:15:51.729 --rc geninfo_all_blocks=1 00:15:51.729 --rc geninfo_unexecuted_blocks=1 00:15:51.729 00:15:51.729 ' 00:15:51.729 19:36:38 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.729 19:36:38 -- nvmf/common.sh@7 -- # uname -s 00:15:51.729 19:36:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.729 19:36:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.729 19:36:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.729 19:36:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.729 19:36:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.729 19:36:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.729 19:36:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.729 19:36:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.729 19:36:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.729 19:36:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.729 19:36:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:51.729 
19:36:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:15:51.729 19:36:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.729 19:36:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.729 19:36:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.729 19:36:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.729 19:36:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.729 19:36:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.729 19:36:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.729 19:36:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.729 19:36:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.729 19:36:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.729 19:36:38 -- paths/export.sh@5 -- # export PATH 00:15:51.729 19:36:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.729 19:36:38 -- nvmf/common.sh@46 -- # : 0 00:15:51.729 19:36:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.729 19:36:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.730 19:36:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.730 19:36:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.730 19:36:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.730 19:36:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:51.730 19:36:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.730 19:36:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.730 19:36:38 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:51.730 19:36:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.730 19:36:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.730 19:36:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.730 19:36:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.730 19:36:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.730 19:36:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.730 19:36:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.730 19:36:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.730 19:36:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:51.730 19:36:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:51.730 19:36:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:51.730 19:36:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:51.730 19:36:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:51.730 19:36:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:51.730 19:36:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.730 19:36:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.730 19:36:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.730 19:36:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:51.730 19:36:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.730 19:36:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.730 19:36:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.730 19:36:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.730 19:36:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.730 19:36:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.730 19:36:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.730 19:36:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.730 19:36:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:51.730 19:36:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:51.730 Cannot find device "nvmf_tgt_br" 00:15:51.730 19:36:38 -- nvmf/common.sh@154 -- # true 00:15:51.730 19:36:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.730 Cannot find device "nvmf_tgt_br2" 00:15:51.730 19:36:38 -- nvmf/common.sh@155 -- # true 00:15:51.730 19:36:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:51.730 19:36:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:51.730 Cannot find device "nvmf_tgt_br" 00:15:51.730 19:36:38 -- nvmf/common.sh@157 -- # true 00:15:51.730 19:36:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:51.730 Cannot find device "nvmf_tgt_br2" 00:15:51.730 19:36:38 -- nvmf/common.sh@158 -- # true 00:15:51.730 19:36:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:51.730 19:36:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:51.988 19:36:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.988 19:36:38 -- nvmf/common.sh@161 -- # true 00:15:51.988 19:36:38 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.988 19:36:38 -- nvmf/common.sh@162 -- # true 00:15:51.988 19:36:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.988 19:36:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.988 19:36:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.988 19:36:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.988 19:36:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.988 19:36:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.988 19:36:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.988 19:36:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.988 19:36:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.988 19:36:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.988 19:36:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.988 19:36:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.988 19:36:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.988 19:36:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.989 19:36:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.989 19:36:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.989 19:36:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.989 19:36:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.989 19:36:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.989 19:36:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.989 19:36:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.989 19:36:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.989 19:36:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.989 19:36:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:15:51.989 00:15:51.989 --- 10.0.0.2 ping statistics --- 00:15:51.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.989 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:15:51.989 19:36:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:51.989 00:15:51.989 --- 10.0.0.3 ping statistics --- 00:15:51.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.989 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:51.989 19:36:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:15:51.989 00:15:51.989 --- 10.0.0.1 ping statistics --- 00:15:51.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.989 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:51.989 19:36:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.989 19:36:38 -- nvmf/common.sh@421 -- # return 0 00:15:51.989 19:36:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.989 19:36:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.989 19:36:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.989 19:36:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.989 19:36:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.989 19:36:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.989 19:36:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.989 19:36:38 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:51.989 19:36:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.989 19:36:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.989 19:36:38 -- common/autotest_common.sh@10 -- # set +x 00:15:51.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.989 19:36:38 -- nvmf/common.sh@469 -- # nvmfpid=85997 00:15:51.989 19:36:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.989 19:36:38 -- nvmf/common.sh@470 -- # waitforlisten 85997 00:15:51.989 19:36:38 -- common/autotest_common.sh@829 -- # '[' -z 85997 ']' 00:15:51.989 19:36:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.989 19:36:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.989 19:36:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.989 19:36:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.989 19:36:38 -- common/autotest_common.sh@10 -- # set +x 00:15:52.247 [2024-12-15 19:36:38.921478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:52.247 [2024-12-15 19:36:38.921841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.247 [2024-12-15 19:36:39.062363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.247 [2024-12-15 19:36:39.129415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.247 [2024-12-15 19:36:39.129886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.247 [2024-12-15 19:36:39.129908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.247 [2024-12-15 19:36:39.129917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
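Annotation: the network the ping checks just verified is the veth topology built by nvmf_veth_init (common.sh@165-201 above): a target namespace nvmf_tgt_ns_spdk, veth pairs whose target-side ends are moved into that namespace, host-side ends enslaved to the nvmf_br bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), plus an iptables rule admitting TCP port 4420. Condensed into a standalone sketch from the commands visible in the log, assuming iproute2 and iptables and omitting the teardown and the second target interface:

    # Target namespace plus one veth pair per side; names follow the log
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # same sanity check the harness runs above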
00:15:52.247 [2024-12-15 19:36:39.129956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.182 19:36:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.182 19:36:39 -- common/autotest_common.sh@862 -- # return 0 00:15:53.182 19:36:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.182 19:36:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 19:36:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.182 19:36:39 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:53.182 19:36:39 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 [2024-12-15 19:36:39.901258] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 [2024-12-15 19:36:39.917317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 malloc0 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:53.182 19:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.182 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:15:53.182 19:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.182 19:36:39 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:53.182 19:36:39 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:53.182 19:36:39 -- nvmf/common.sh@520 -- # config=() 00:15:53.182 19:36:39 -- nvmf/common.sh@520 -- # local subsystem config 00:15:53.182 19:36:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:53.182 19:36:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:53.182 { 00:15:53.182 "params": { 00:15:53.182 "name": "Nvme$subsystem", 00:15:53.182 "trtype": "$TEST_TRANSPORT", 
00:15:53.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.182 "adrfam": "ipv4", 00:15:53.182 "trsvcid": "$NVMF_PORT", 00:15:53.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.183 "hdgst": ${hdgst:-false}, 00:15:53.183 "ddgst": ${ddgst:-false} 00:15:53.183 }, 00:15:53.183 "method": "bdev_nvme_attach_controller" 00:15:53.183 } 00:15:53.183 EOF 00:15:53.183 )") 00:15:53.183 19:36:39 -- nvmf/common.sh@542 -- # cat 00:15:53.183 19:36:39 -- nvmf/common.sh@544 -- # jq . 00:15:53.183 19:36:39 -- nvmf/common.sh@545 -- # IFS=, 00:15:53.183 19:36:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:53.183 "params": { 00:15:53.183 "name": "Nvme1", 00:15:53.183 "trtype": "tcp", 00:15:53.183 "traddr": "10.0.0.2", 00:15:53.183 "adrfam": "ipv4", 00:15:53.183 "trsvcid": "4420", 00:15:53.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.183 "hdgst": false, 00:15:53.183 "ddgst": false 00:15:53.183 }, 00:15:53.183 "method": "bdev_nvme_attach_controller" 00:15:53.183 }' 00:15:53.183 [2024-12-15 19:36:40.001810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:53.183 [2024-12-15 19:36:40.001932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86049 ] 00:15:53.441 [2024-12-15 19:36:40.138736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.441 [2024-12-15 19:36:40.217061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.699 Running I/O for 10 seconds... 00:16:03.671 00:16:03.671 Latency(us) 00:16:03.671 [2024-12-15T19:36:50.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.671 [2024-12-15T19:36:50.567Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:03.671 Verification LBA range: start 0x0 length 0x1000 00:16:03.671 Nvme1n1 : 10.01 11468.47 89.60 0.00 0.00 11133.53 1094.75 18588.39 00:16:03.671 [2024-12-15T19:36:50.567Z] =================================================================================================================== 00:16:03.671 [2024-12-15T19:36:50.567Z] Total : 11468.47 89.60 0.00 0.00 11133.53 1094.75 18588.39 00:16:03.930 19:36:50 -- target/zcopy.sh@39 -- # perfpid=86161 00:16:03.930 19:36:50 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:03.930 19:36:50 -- common/autotest_common.sh@10 -- # set +x 00:16:03.930 19:36:50 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:03.930 19:36:50 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:03.930 19:36:50 -- nvmf/common.sh@520 -- # config=() 00:16:03.930 19:36:50 -- nvmf/common.sh@520 -- # local subsystem config 00:16:03.930 19:36:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:03.930 [2024-12-15 19:36:50.690158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.690228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 19:36:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:03.930 { 00:16:03.930 "params": { 00:16:03.930 "name": "Nvme$subsystem", 00:16:03.930 "trtype": "$TEST_TRANSPORT", 00:16:03.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.930 "adrfam": 
"ipv4", 00:16:03.930 "trsvcid": "$NVMF_PORT", 00:16:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.930 "hdgst": ${hdgst:-false}, 00:16:03.930 "ddgst": ${ddgst:-false} 00:16:03.930 }, 00:16:03.930 "method": "bdev_nvme_attach_controller" 00:16:03.930 } 00:16:03.930 EOF 00:16:03.930 )") 00:16:03.930 19:36:50 -- nvmf/common.sh@542 -- # cat 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.698072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.698101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 19:36:50 -- nvmf/common.sh@544 -- # jq . 00:16:03.930 19:36:50 -- nvmf/common.sh@545 -- # IFS=, 00:16:03.930 19:36:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:03.930 "params": { 00:16:03.930 "name": "Nvme1", 00:16:03.930 "trtype": "tcp", 00:16:03.930 "traddr": "10.0.0.2", 00:16:03.930 "adrfam": "ipv4", 00:16:03.930 "trsvcid": "4420", 00:16:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.930 "hdgst": false, 00:16:03.930 "ddgst": false 00:16:03.930 }, 00:16:03.930 "method": "bdev_nvme_attach_controller" 00:16:03.930 }' 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.710071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.710098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.718067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.718279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.726074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.726101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.734072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.734096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.740607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:03.930 [2024-12-15 19:36:50.740715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86161 ] 00:16:03.930 [2024-12-15 19:36:50.742073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.742098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.930 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.930 [2024-12-15 19:36:50.750078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.930 [2024-12-15 19:36:50.750103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.758085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.758110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.766101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.766124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.774087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.774111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.782102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.782125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.790101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.790124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.798105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.798145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.806109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.806147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.931 [2024-12-15 19:36:50.814110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.931 [2024-12-15 19:36:50.814148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.931 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.190 [2024-12-15 19:36:50.826106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.190 [2024-12-15 19:36:50.826130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.190 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.190 [2024-12-15 19:36:50.834108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.190 [2024-12-15 19:36:50.834132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.190 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.190 [2024-12-15 19:36:50.842106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.190 [2024-12-15 19:36:50.842130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.190 2024/12/15 19:36:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.850109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.850163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.862130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.862154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.870114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.870137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.874592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.191 [2024-12-15 19:36:50.878122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.878147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.886142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.886185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.894133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.894156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.902124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 
19:36:50.902148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.910129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.910154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.918128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.918151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.926133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.926157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.934135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.934159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.942139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.942178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 [2024-12-15 19:36:50.943284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.954142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.954168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 
19:36:50.962139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.962163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.970141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.970165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.978143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.978166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.986147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.986171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:50.994150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:50.994174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.002150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.002175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.010159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.010184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
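Annotation: the long run of near-identical records here is the zcopy test deliberately exercising the namespace-add error path while bdevperf keeps I/O running. Each attempt to re-attach malloc0 as NSID 1 to nqn.2016-06.io.spdk:cnode1 is rejected because that NSID is already in use, and the RPC client reports JSON-RPC error -32602 (invalid parameters). A standalone reproduction of one such call, sketched under the assumption that rpc_cmd wraps the in-tree scripts/rpc.py and that the target listens on the default /var/tmp/spdk.sock inside the target namespace:

    # Re-adding the same bdev as the same NSID fails with -32602 while the namespace exists
    ip netns exec nvmf_tgt_ns_spdk \
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1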
00:16:04.191 [2024-12-15 19:36:51.018154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.018177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.026162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.026186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.038176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.038202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.046176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.046199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.054173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.054196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.062172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.062196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.191 [2024-12-15 19:36:51.070201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.191 [2024-12-15 19:36:51.070229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.191 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:04.192 [2024-12-15 19:36:51.078187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.192 [2024-12-15 19:36:51.078215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.192 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.086209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.086235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.094183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.094210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.102214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.102242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.110189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.110215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.118223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.118249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.126192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.126217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.134316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.134368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 Running I/O for 5 seconds... 00:16:04.451 [2024-12-15 19:36:51.142311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.142341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.451 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.451 [2024-12-15 19:36:51.150540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.451 [2024-12-15 19:36:51.150568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.162671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.162729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.174729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.174770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.183258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.183287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.195120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.195162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.205784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.205814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.213734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.213763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.224990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.225020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.236006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.236047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.244203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.244260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.255275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.255316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.263906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.263934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.272724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.272766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.281798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.281851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.291022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.291064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.300082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.300128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.308891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.308933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.317623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.317664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.326757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.326799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.335577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.335617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.452 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.452 [2024-12-15 19:36:51.344628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.452 [2024-12-15 19:36:51.344668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.353590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.353630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.362479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.362509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.371376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.371404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.379962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.379991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.388584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.388624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.397380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.397421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.406012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.406054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.414816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.414870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.423683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.423725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.432814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.432874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.441692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.441732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.450696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.450736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 
19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.712 [2024-12-15 19:36:51.459519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.712 [2024-12-15 19:36:51.459559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.712 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.468553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.468593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.477630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.477670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.486546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.486575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.495361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.495401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.504533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.504561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.513373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.513414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.522118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.522158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.531147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.531203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.540161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.540203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.548925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.548966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.558191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.558232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.566933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.566974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.575526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.575566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.584516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.584556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.593394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.593434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.713 [2024-12-15 19:36:51.602302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.713 [2024-12-15 19:36:51.602331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.713 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.611235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.611275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.620094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.620134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.628900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.628941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.637719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.637760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.646502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.646531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.655236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.655276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.663983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.664025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.677608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.677649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.692710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.692751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.709528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.709570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.725210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.725238] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.740215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.740258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.756016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.756047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.770126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.770156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.786017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.786046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.801668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.801698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.816231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.816275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.832585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 
19:36:51.832615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.845130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.845161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.973 [2024-12-15 19:36:51.857402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.973 [2024-12-15 19:36:51.857431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.973 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.873161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.873193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.889894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.889924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.905466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.905495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.919549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.919725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.934948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:05.233 [2024-12-15 19:36:51.934978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.950808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.951003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.965296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.965325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.976963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.976994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:51.991935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:51.991963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.007879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.007909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.023889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.023918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.038062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:05.233 [2024-12-15 19:36:52.038091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.049462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.049492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.064643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.064813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.080993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.081023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.092338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.092369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.108023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.108053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.233 [2024-12-15 19:36:52.123575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.233 [2024-12-15 19:36:52.123744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.233 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.134443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.134476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.150073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.150104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.165592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.165621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.179879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.179908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.191631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.191794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.207249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.207280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.223150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.223180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.239691] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.239720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.250231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.250260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.492 [2024-12-15 19:36:52.266281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.492 [2024-12-15 19:36:52.266310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.492 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.282096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.282126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.298410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.298443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.316152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.316200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.330243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.330273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 
19:36:52.345961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.345990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.362282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.362312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.493 [2024-12-15 19:36:52.378400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.493 [2024-12-15 19:36:52.378432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.493 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.752 [2024-12-15 19:36:52.395223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.752 [2024-12-15 19:36:52.395254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.752 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.752 [2024-12-15 19:36:52.411648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.752 [2024-12-15 19:36:52.411679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.752 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.752 [2024-12-15 19:36:52.428102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.752 [2024-12-15 19:36:52.428133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.444560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.444591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
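The repeated Code=-32602 Msg=Invalid parameters responses above all come from re-sending the same nvmf_subsystem_add_ns request while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1. Below is a minimal sketch of that request as it could be replayed against the SPDK JSON-RPC socket; the socket path and the raw Unix-socket client (rather than SPDK's own scripts/rpc.py) are assumptions for illustration, while the method name and params mirror the log entries.

```python
import json
import socket

# Request body reconstructed from the log's params map:
#   map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

# /var/tmp/spdk.sock is SPDK's default RPC socket path; it is assumed here,
# since the log does not show which socket the test harness used.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")
sock.sendall(json.dumps(request).encode())

# While NSID 1 is already in use on the subsystem, the target answers with
# the JSON-RPC error seen above: Code=-32602 (invalid parameters).
print(sock.recv(65536).decode())
sock.close()
```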
00:16:05.753 [2024-12-15 19:36:52.460319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.460348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.476336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.476365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.487773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.487802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.503923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.503953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.519916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.519946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.531767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.531797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.548119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.548148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.563539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.563568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.579641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.579670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.596569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.596598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.612577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.612607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.628534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.628563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.753 [2024-12-15 19:36:52.642822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.753 [2024-12-15 19:36:52.642862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.753 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.657673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.657849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.673605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.673765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.685080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.685111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.700976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.701006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.716557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.716719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.732896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.732927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.749299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.749330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.765687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.765718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.012 [2024-12-15 19:36:52.781855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.012 [2024-12-15 19:36:52.781886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.012 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.792379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.792408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.807868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.807898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.823602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.823783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.837978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.838007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.851952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.851981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.866937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.866969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.883917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.883948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.013 [2024-12-15 19:36:52.900386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.013 [2024-12-15 19:36:52.900415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.013 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.916299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.916329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.927720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.927750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.942758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.942952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.959477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.959508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.975166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.975195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:52.986987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:52.987017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:53.002919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:53.002948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:53.018471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:53.018639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:53.034747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:53.034777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.272 [2024-12-15 19:36:53.050977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.272 [2024-12-15 19:36:53.051006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.272 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.062855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.062917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.078002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.078033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.094095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.094124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.108486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.108515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.120347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.120378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.135603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.135633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.152000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.152028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.273 [2024-12-15 19:36:53.164118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.273 [2024-12-15 19:36:53.164147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.273 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.178944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.178973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.195183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.195212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.207153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.207183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.221719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.221907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.232711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.232741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.248334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.248365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.264198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.264227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.280424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.280453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.296256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.296286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.312323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.312352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.327994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.328024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.341762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.341969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.357160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.357335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.373373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.373402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.389537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.389567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 
19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.532 [2024-12-15 19:36:53.401726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.532 [2024-12-15 19:36:53.401757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.532 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.533 [2024-12-15 19:36:53.413435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.533 [2024-12-15 19:36:53.413464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.533 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.429095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.429124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.449285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.449315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.466701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.466731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.482510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.482543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.499623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.499654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.516310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.516458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.532892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.532923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.549751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.549782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.565896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.565926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.581793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.581839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.595705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.595878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.611978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.612006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.627549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.627727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.641456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.641488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.656626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.656657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.792 [2024-12-15 19:36:53.673095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.792 [2024-12-15 19:36:53.673125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.792 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.051 [2024-12-15 19:36:53.689121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.689150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.700476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.700505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.717451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.717479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.732249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.732278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.747578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.747609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.764423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.764453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.781083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.781114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.796395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.796555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.811690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.811858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.827777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.827962] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.842581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.842745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.853816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.854009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.868875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.869037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.885673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.885705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.902711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.902741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.919173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 19:36:53.919381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.052 [2024-12-15 19:36:53.935872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.052 [2024-12-15 
19:36:53.935921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.052 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.311 [2024-12-15 19:36:53.949715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.311 [2024-12-15 19:36:53.949745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.311 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.311 [2024-12-15 19:36:53.966180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.311 [2024-12-15 19:36:53.966422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.311 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.311 [2024-12-15 19:36:53.981983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.311 [2024-12-15 19:36:53.982013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:53.994193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:53.994223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.009213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.009392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.026114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.026143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.041642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:07.312 [2024-12-15 19:36:54.041672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.056015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.056046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.071729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.071941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.087322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.087499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.102101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.102134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.112693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.112723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.128313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.128342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.143621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:07.312 [2024-12-15 19:36:54.143650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.158042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.158072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.173006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.173037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.185272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.185302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.312 [2024-12-15 19:36:54.196503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.312 [2024-12-15 19:36:54.196532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.312 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.211703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.211911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.228643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.228673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.244709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.244739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.260793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.260833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.276959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.276988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.287910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.287940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.302790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.302859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.319037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.319066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.335409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.335438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.351687] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.351717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.368585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.368615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.384776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.384805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.400622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.400653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.415046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.415075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.429869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.429897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 19:36:54.446272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.446302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.572 [2024-12-15 
19:36:54.462107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.572 [2024-12-15 19:36:54.462154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.572 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.477468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.477515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.493334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.493381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.507329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.507375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.522123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.522169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.534083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.534114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.831 [2024-12-15 19:36:54.550024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.831 [2024-12-15 19:36:54.550070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.831 2024/12/15 19:36:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
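(Annotation, not part of the captured console output.) The repeated failures above all come from the same JSON-RPC call: the test keeps invoking nvmf_subsystem_add_ns for nqn.2016-06.io.spdk:cnode1 with bdev malloc0 and an explicit nsid of 1 while NSID 1 is already attached, so the target rejects each attempt with the generic JSON-RPC "Invalid parameters" error (Code=-32602). Below is a minimal sketch of what such a duplicate-NSID call looks like on the wire; it is not the test harness used here, and it assumes the default SPDK RPC socket path /var/tmp/spdk.sock and that the subsystem and the malloc0 bdev already exist.

```python
#!/usr/bin/env python3
# Sketch only: replay the nvmf_subsystem_add_ns call seen in the log against a
# running SPDK target. Assumptions: default RPC socket /var/tmp/spdk.sock,
# subsystem nqn.2016-06.io.spdk:cnode1 and bdev "malloc0" already created.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK JSON-RPC listen address


def rpc_call(sock, method, params, req_id):
    """Send one JSON-RPC 2.0 request and read back a single response object."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    sock.sendall(json.dumps(req).encode())
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("RPC socket closed before a full response arrived")
        buf += chunk
        try:
            return json.loads(buf)  # response is one JSON object; retry until complete
        except json.JSONDecodeError:
            continue


with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK_PATH)
    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }
    first = rpc_call(s, "nvmf_subsystem_add_ns", params, 1)   # succeeds, claims NSID 1
    second = rpc_call(s, "nvmf_subsystem_add_ns", params, 2)  # NSID 1 now in use
    # The second response carries the same error shown throughout this log:
    # {"code": -32602, "message": "Invalid parameters"}
    print("first:", first.get("result"), "second error:", second.get("error"))
```

-32602 is the standard JSON-RPC 2.0 "invalid params" code, which is why every retry in the log reports the same Code/Msg pair even though the underlying cause (duplicate NSID) is printed separately by subsystem.c.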
00:16:07.831 [log condensed] Between [2024-12-15 19:36:54.565716] and [2024-12-15 19:36:56.139169] (elapsed 00:16:07.831 to 00:16:09.394) every one of the test's repeated nvmf_subsystem_add_ns calls is rejected. Each rejected call emits the same three entries, differing only in timestamp:
    subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
    nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
    error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:09.394 
00:16:09.394 Latency(us)
00:16:09.394 [2024-12-15T19:36:56.290Z] Device Information                                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:16:09.394 [2024-12-15T19:36:56.290Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:09.394 Nvme1n1                                                                                                      : 5.00    14444.34     112.85       0.00       0.00    8852.37    3753.43   19303.33
00:16:09.394 [2024-12-15T19:36:56.290Z] ===================================================================================================================
00:16:09.394 [2024-12-15T19:36:56.290Z] Total                                                                                        :         14444.34     112.85       0.00       0.00    8852.37    3753.43   19303.33
00:16:09.394 [log condensed] The identical rejection triplet continues from [2024-12-15 19:36:56.149799] through [2024-12-15 19:36:56.209843] after the I/O summary above is printed.
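For reference, a minimal standalone sketch of the RPC sequence that provokes the Code=-32602 rejection seen above: while NSID 1 is attached, a second nvmf_subsystem_add_ns request for NSID 1 is refused. This is not taken from the run; it assumes a running nvmf target, uses the stock scripts/rpc.py client from the spdk_repo path of this job, and the bdev size and serial number are made-up illustrative values (option spellings can vary between SPDK releases).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Create a malloc bdev and expose it as NSID 1 (size/serial are illustrative only).
    $rpc bdev_malloc_create -b malloc0 32 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds: NSID 1 is now in use
    # Asking for the same NSID again is what the log shows failing repeatedly:
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Code=-32602 Msg=Invalid parameters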
[log condensed] The same rejection triplet keeps repeating from [2024-12-15 19:36:56.221813] through [2024-12-15 19:36:56.413893] (elapsed 00:16:09.394 to 00:16:09.653), until the script kills the add_ns loop and moves on:
00:16:09.653 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86161) - No such process
00:16:09.653 19:36:56 -- target/zcopy.sh@49 -- # wait 86161
00:16:09.653 19:36:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:09.653 19:36:56 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.653 19:36:56 -- common/autotest_common.sh@10 -- # set +x
00:16:09.653 19:36:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.653 19:36:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:09.653 19:36:56 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.653 19:36:56 -- common/autotest_common.sh@10 -- # set +x
00:16:09.653 delay0
00:16:09.653 19:36:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.653 19:36:56 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:09.653 19:36:56 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.653 19:36:56 -- common/autotest_common.sh@10 -- # set +x
00:16:09.653 19:36:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.653 19:36:56 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:16:09.911 [2024-12-15 19:36:56.613317] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:16.577 Initializing NVMe Controllers
00:16:16.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:16.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:16.577 Initialization complete. Launching workers.
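The traced rpc_cmd calls above swap the plain malloc0 namespace for a delay bdev and then drive it with the bundled abort example. A rough standalone equivalent is sketched below, using scripts/rpc.py directly instead of the autotest rpc_cmd wrapper; it assumes the target from this run is still listening on 10.0.0.2:4420, and the latency values and abort flags are simply copied from the traced commands, not independently recommended settings.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
    # Detach the original namespace and re-export it behind a delay bdev
    # (artificial read/write latencies as in the traced bdev_delay_create call).
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Run the abort example against the slowed-down namespace: 5 seconds, queue depth 64,
    # 50/50 random read/write, mirroring the zcopy.sh@56 invocation above.
    $abort_bin -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'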
00:16:16.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 77 00:16:16.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 33 00:16:16.577 success 185, unsuccess 179, failed 0 00:16:16.577 19:37:02 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:16.577 19:37:02 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:16.577 19:37:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.577 19:37:02 -- nvmf/common.sh@116 -- # sync 00:16:16.577 19:37:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.577 19:37:02 -- nvmf/common.sh@119 -- # set +e 00:16:16.577 19:37:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.577 19:37:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.577 rmmod nvme_tcp 00:16:16.577 rmmod nvme_fabrics 00:16:16.577 rmmod nvme_keyring 00:16:16.577 19:37:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:16.577 19:37:02 -- nvmf/common.sh@123 -- # set -e 00:16:16.577 19:37:02 -- nvmf/common.sh@124 -- # return 0 00:16:16.577 19:37:02 -- nvmf/common.sh@477 -- # '[' -n 85997 ']' 00:16:16.577 19:37:02 -- nvmf/common.sh@478 -- # killprocess 85997 00:16:16.577 19:37:02 -- common/autotest_common.sh@936 -- # '[' -z 85997 ']' 00:16:16.577 19:37:02 -- common/autotest_common.sh@940 -- # kill -0 85997 00:16:16.577 19:37:02 -- common/autotest_common.sh@941 -- # uname 00:16:16.577 19:37:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.577 19:37:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85997 00:16:16.577 19:37:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.577 19:37:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.577 killing process with pid 85997 00:16:16.577 19:37:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85997' 00:16:16.577 19:37:02 -- common/autotest_common.sh@955 -- # kill 85997 00:16:16.577 19:37:02 -- common/autotest_common.sh@960 -- # wait 85997 00:16:16.577 19:37:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.577 19:37:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.577 19:37:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.577 19:37:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.577 19:37:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.577 19:37:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.577 19:37:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.577 19:37:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.577 19:37:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:16.577 00:16:16.577 real 0m24.853s 00:16:16.577 user 0m40.230s 00:16:16.577 sys 0m6.507s 00:16:16.577 19:37:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.577 ************************************ 00:16:16.577 19:37:03 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 END TEST nvmf_zcopy 00:16:16.577 ************************************ 00:16:16.577 19:37:03 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:16.577 19:37:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.577 19:37:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.577 19:37:03 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 ************************************ 00:16:16.577 START TEST nvmf_nmic 
00:16:16.577 ************************************ 00:16:16.577 19:37:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:16.577 * Looking for test storage... 00:16:16.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.577 19:37:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:16.577 19:37:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:16.577 19:37:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:16.577 19:37:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:16.577 19:37:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:16.577 19:37:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:16.577 19:37:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:16.577 19:37:03 -- scripts/common.sh@335 -- # IFS=.-: 00:16:16.577 19:37:03 -- scripts/common.sh@335 -- # read -ra ver1 00:16:16.577 19:37:03 -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.577 19:37:03 -- scripts/common.sh@336 -- # read -ra ver2 00:16:16.577 19:37:03 -- scripts/common.sh@337 -- # local 'op=<' 00:16:16.577 19:37:03 -- scripts/common.sh@339 -- # ver1_l=2 00:16:16.577 19:37:03 -- scripts/common.sh@340 -- # ver2_l=1 00:16:16.577 19:37:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:16.577 19:37:03 -- scripts/common.sh@343 -- # case "$op" in 00:16:16.577 19:37:03 -- scripts/common.sh@344 -- # : 1 00:16:16.577 19:37:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:16.577 19:37:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.577 19:37:03 -- scripts/common.sh@364 -- # decimal 1 00:16:16.577 19:37:03 -- scripts/common.sh@352 -- # local d=1 00:16:16.577 19:37:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.577 19:37:03 -- scripts/common.sh@354 -- # echo 1 00:16:16.577 19:37:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:16.577 19:37:03 -- scripts/common.sh@365 -- # decimal 2 00:16:16.577 19:37:03 -- scripts/common.sh@352 -- # local d=2 00:16:16.577 19:37:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.577 19:37:03 -- scripts/common.sh@354 -- # echo 2 00:16:16.577 19:37:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:16.577 19:37:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:16.577 19:37:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:16.577 19:37:03 -- scripts/common.sh@367 -- # return 0 00:16:16.577 19:37:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.577 19:37:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.577 --rc genhtml_branch_coverage=1 00:16:16.577 --rc genhtml_function_coverage=1 00:16:16.577 --rc genhtml_legend=1 00:16:16.577 --rc geninfo_all_blocks=1 00:16:16.577 --rc geninfo_unexecuted_blocks=1 00:16:16.577 00:16:16.577 ' 00:16:16.577 19:37:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.577 --rc genhtml_branch_coverage=1 00:16:16.577 --rc genhtml_function_coverage=1 00:16:16.577 --rc genhtml_legend=1 00:16:16.577 --rc geninfo_all_blocks=1 00:16:16.577 --rc geninfo_unexecuted_blocks=1 00:16:16.577 00:16:16.577 ' 00:16:16.577 19:37:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.577 --rc 
genhtml_branch_coverage=1 00:16:16.577 --rc genhtml_function_coverage=1 00:16:16.578 --rc genhtml_legend=1 00:16:16.578 --rc geninfo_all_blocks=1 00:16:16.578 --rc geninfo_unexecuted_blocks=1 00:16:16.578 00:16:16.578 ' 00:16:16.578 19:37:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:16.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.578 --rc genhtml_branch_coverage=1 00:16:16.578 --rc genhtml_function_coverage=1 00:16:16.578 --rc genhtml_legend=1 00:16:16.578 --rc geninfo_all_blocks=1 00:16:16.578 --rc geninfo_unexecuted_blocks=1 00:16:16.578 00:16:16.578 ' 00:16:16.578 19:37:03 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.578 19:37:03 -- nvmf/common.sh@7 -- # uname -s 00:16:16.578 19:37:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.578 19:37:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.578 19:37:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.578 19:37:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.578 19:37:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.578 19:37:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.578 19:37:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.578 19:37:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.578 19:37:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.578 19:37:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:16.578 19:37:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:16.578 19:37:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.578 19:37:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.578 19:37:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.578 19:37:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.578 19:37:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.578 19:37:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.578 19:37:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.578 19:37:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.578 19:37:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.578 19:37:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.578 19:37:03 -- paths/export.sh@5 -- # export PATH 00:16:16.578 19:37:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.578 19:37:03 -- nvmf/common.sh@46 -- # : 0 00:16:16.578 19:37:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.578 19:37:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.578 19:37:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.578 19:37:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.578 19:37:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.578 19:37:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.578 19:37:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.578 19:37:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.578 19:37:03 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.578 19:37:03 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.578 19:37:03 -- target/nmic.sh@14 -- # nvmftestinit 00:16:16.578 19:37:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.578 19:37:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.578 19:37:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.578 19:37:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.578 19:37:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.578 19:37:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.578 19:37:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.578 19:37:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.578 19:37:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:16.578 19:37:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:16.578 19:37:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.578 19:37:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.578 19:37:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.578 19:37:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:16.578 19:37:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.578 19:37:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.578 19:37:03 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.578 19:37:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.578 19:37:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.578 19:37:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.578 19:37:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.578 19:37:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.578 19:37:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:16.578 19:37:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:16.578 Cannot find device "nvmf_tgt_br" 00:16:16.578 19:37:03 -- nvmf/common.sh@154 -- # true 00:16:16.578 19:37:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.578 Cannot find device "nvmf_tgt_br2" 00:16:16.578 19:37:03 -- nvmf/common.sh@155 -- # true 00:16:16.578 19:37:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:16.578 19:37:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:16.578 Cannot find device "nvmf_tgt_br" 00:16:16.578 19:37:03 -- nvmf/common.sh@157 -- # true 00:16:16.578 19:37:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:16.578 Cannot find device "nvmf_tgt_br2" 00:16:16.578 19:37:03 -- nvmf/common.sh@158 -- # true 00:16:16.578 19:37:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.837 19:37:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.837 19:37:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.837 19:37:03 -- nvmf/common.sh@161 -- # true 00:16:16.837 19:37:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.837 19:37:03 -- nvmf/common.sh@162 -- # true 00:16:16.837 19:37:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.837 19:37:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.837 19:37:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.837 19:37:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.837 19:37:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.837 19:37:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.837 19:37:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.837 19:37:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.837 19:37:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.837 19:37:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.837 19:37:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.837 19:37:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.837 19:37:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.837 19:37:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.837 19:37:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.837 19:37:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:16.837 19:37:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.837 19:37:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.837 19:37:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.837 19:37:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.837 19:37:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.837 19:37:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.837 19:37:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.837 19:37:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:16.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:16:16.837 00:16:16.837 --- 10.0.0.2 ping statistics --- 00:16:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.837 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:16.837 19:37:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:16.837 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.837 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:16.837 00:16:16.837 --- 10.0.0.3 ping statistics --- 00:16:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.837 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:16.837 19:37:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:16.837 00:16:16.837 --- 10.0.0.1 ping statistics --- 00:16:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.837 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:16.837 19:37:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.837 19:37:03 -- nvmf/common.sh@421 -- # return 0 00:16:16.837 19:37:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.837 19:37:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.837 19:37:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.837 19:37:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.837 19:37:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.837 19:37:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.837 19:37:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.096 19:37:03 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:17.096 19:37:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.096 19:37:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.096 19:37:03 -- common/autotest_common.sh@10 -- # set +x 00:16:17.096 19:37:03 -- nvmf/common.sh@469 -- # nvmfpid=86493 00:16:17.096 19:37:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.096 19:37:03 -- nvmf/common.sh@470 -- # waitforlisten 86493 00:16:17.096 19:37:03 -- common/autotest_common.sh@829 -- # '[' -z 86493 ']' 00:16:17.096 19:37:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.096 19:37:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.096 19:37:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:17.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.096 19:37:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.096 19:37:03 -- common/autotest_common.sh@10 -- # set +x 00:16:17.096 [2024-12-15 19:37:03.804971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:17.096 [2024-12-15 19:37:03.805051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.096 [2024-12-15 19:37:03.948731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.355 [2024-12-15 19:37:04.031289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.355 [2024-12-15 19:37:04.031469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.355 [2024-12-15 19:37:04.031486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.355 [2024-12-15 19:37:04.031498] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.355 [2024-12-15 19:37:04.031636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.355 [2024-12-15 19:37:04.031691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.355 [2024-12-15 19:37:04.031871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.355 [2024-12-15 19:37:04.031873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.291 19:37:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.291 19:37:04 -- common/autotest_common.sh@862 -- # return 0 00:16:18.291 19:37:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.291 19:37:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 19:37:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.291 19:37:04 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 [2024-12-15 19:37:04.893498] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 Malloc0 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 
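The target is now up inside the network namespace, with one reactor per core of the 0xF mask. Condensing the nvmf_veth_init and nvmfappstart steps traced above into a standalone sketch (root privileges assumed; interface names, addresses, and the binary path are the ones from the trace, and the link-up and iptables steps shown above are omitted here for brevity):

    # Initiator veth stays in the root namespace, target veth moves into its own namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    # Bridge the two root-namespace ends so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Start the target inside the namespace: instance 0, all tracepoints, cores 0-3.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF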
00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 [2024-12-15 19:37:04.959017] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.291 test case1: single bdev can't be used in multiple subsystems 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:18.291 19:37:04 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@28 -- # nmic_status=0 00:16:18.291 19:37:04 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 [2024-12-15 19:37:04.982831] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:18.291 [2024-12-15 19:37:04.982865] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:18.291 [2024-12-15 19:37:04.982892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.291 2024/12/15 19:37:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:18.291 request: 00:16:18.291 { 00:16:18.291 "method": "nvmf_subsystem_add_ns", 00:16:18.291 "params": { 00:16:18.291 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:18.291 "namespace": { 00:16:18.291 "bdev_name": "Malloc0" 00:16:18.291 } 00:16:18.291 } 00:16:18.291 } 00:16:18.291 Got JSON-RPC error response 00:16:18.291 GoRPCClient: error on JSON-RPC call 00:16:18.291 Adding namespace failed - expected result. 00:16:18.291 test case2: host connect to nvmf target in multiple paths 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@29 -- # nmic_status=1 00:16:18.291 19:37:04 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:18.291 19:37:04 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
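Test case1 above shows the expected rejection: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 fails with the -32602 response printed in full. The same sequence as a standalone rpc.py sketch (bdev name, NQNs, serials, and the repo path are taken from the trace; the test drives these calls through rpc_cmd rather than directly):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: bdev already claimed

Test case2, which follows, then verifies that a host can reach the same subsystem through listeners on both port 4420 and port 4421.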
00:16:18.291 19:37:04 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:18.291 19:37:04 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:18.291 19:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.291 19:37:04 -- common/autotest_common.sh@10 -- # set +x 00:16:18.291 [2024-12-15 19:37:04.994944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:18.291 19:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.291 19:37:04 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.291 19:37:05 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:18.549 19:37:05 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.549 19:37:05 -- common/autotest_common.sh@1187 -- # local i=0 00:16:18.549 19:37:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.549 19:37:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:18.549 19:37:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:21.081 19:37:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:21.081 19:37:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:21.081 19:37:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.081 19:37:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:21.081 19:37:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.081 19:37:07 -- common/autotest_common.sh@1197 -- # return 0 00:16:21.081 19:37:07 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:21.081 [global] 00:16:21.081 thread=1 00:16:21.081 invalidate=1 00:16:21.081 rw=write 00:16:21.081 time_based=1 00:16:21.081 runtime=1 00:16:21.081 ioengine=libaio 00:16:21.081 direct=1 00:16:21.081 bs=4096 00:16:21.081 iodepth=1 00:16:21.081 norandommap=0 00:16:21.081 numjobs=1 00:16:21.081 00:16:21.081 verify_dump=1 00:16:21.081 verify_backlog=512 00:16:21.081 verify_state_save=0 00:16:21.081 do_verify=1 00:16:21.081 verify=crc32c-intel 00:16:21.081 [job0] 00:16:21.081 filename=/dev/nvme0n1 00:16:21.081 Could not set queue depth (nvme0n1) 00:16:21.081 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.081 fio-3.35 00:16:21.081 Starting 1 thread 00:16:22.016 00:16:22.016 job0: (groupid=0, jobs=1): err= 0: pid=86604: Sun Dec 15 19:37:08 2024 00:16:22.016 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:22.016 slat (nsec): min=11868, max=50698, avg=14267.74, stdev=3717.17 00:16:22.016 clat (usec): min=110, max=269, avg=132.88, stdev=12.83 00:16:22.016 lat (usec): min=122, max=287, avg=147.15, stdev=13.51 00:16:22.016 clat percentiles (usec): 00:16:22.016 | 1.00th=[ 116], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:16:22.017 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:16:22.017 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:16:22.017 | 99.00th=[ 167], 99.50th=[ 176], 
99.90th=[ 219], 99.95th=[ 237], 00:16:22.017 | 99.99th=[ 269] 00:16:22.017 write: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets 00:16:22.017 slat (usec): min=18, max=114, avg=22.13, stdev= 6.31 00:16:22.017 clat (usec): min=77, max=253, avg=97.57, stdev=11.11 00:16:22.017 lat (usec): min=97, max=276, avg=119.70, stdev=13.29 00:16:22.017 clat percentiles (usec): 00:16:22.017 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:16:22.017 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 96], 00:16:22.017 | 70.00th=[ 100], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 121], 00:16:22.017 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 163], 00:16:22.017 | 99.99th=[ 253] 00:16:22.017 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:22.017 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:22.017 lat (usec) : 100=36.49%, 250=63.48%, 500=0.03% 00:16:22.017 cpu : usr=2.90%, sys=9.80%, ctx=7438, majf=0, minf=5 00:16:22.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.017 issued rwts: total=3584,3853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.017 00:16:22.017 Run status group 0 (all jobs): 00:16:22.017 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:22.017 WRITE: bw=15.0MiB/s (15.8MB/s), 15.0MiB/s-15.0MiB/s (15.8MB/s-15.8MB/s), io=15.1MiB (15.8MB), run=1001-1001msec 00:16:22.017 00:16:22.017 Disk stats (read/write): 00:16:22.017 nvme0n1: ios=3159/3584, merge=0/0, ticks=457/404, in_queue=861, util=91.28% 00:16:22.017 19:37:08 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:22.017 19:37:08 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.017 19:37:08 -- common/autotest_common.sh@1208 -- # local i=0 00:16:22.017 19:37:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:22.017 19:37:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.017 19:37:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:22.017 19:37:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.017 19:37:08 -- common/autotest_common.sh@1220 -- # return 0 00:16:22.017 19:37:08 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:22.017 19:37:08 -- target/nmic.sh@53 -- # nvmftestfini 00:16:22.017 19:37:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:22.017 19:37:08 -- nvmf/common.sh@116 -- # sync 00:16:22.017 19:37:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.017 19:37:08 -- nvmf/common.sh@119 -- # set +e 00:16:22.017 19:37:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.017 19:37:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.017 rmmod nvme_tcp 00:16:22.017 rmmod nvme_fabrics 00:16:22.017 rmmod nvme_keyring 00:16:22.017 19:37:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.017 19:37:08 -- nvmf/common.sh@123 -- # set -e 00:16:22.017 19:37:08 -- nvmf/common.sh@124 -- # return 0 00:16:22.017 19:37:08 -- nvmf/common.sh@477 -- # '[' -n 86493 ']' 00:16:22.017 19:37:08 -- nvmf/common.sh@478 -- # killprocess 
86493 00:16:22.017 19:37:08 -- common/autotest_common.sh@936 -- # '[' -z 86493 ']' 00:16:22.017 19:37:08 -- common/autotest_common.sh@940 -- # kill -0 86493 00:16:22.017 19:37:08 -- common/autotest_common.sh@941 -- # uname 00:16:22.017 19:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.017 19:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86493 00:16:22.275 killing process with pid 86493 00:16:22.275 19:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.275 19:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.275 19:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86493' 00:16:22.275 19:37:08 -- common/autotest_common.sh@955 -- # kill 86493 00:16:22.275 19:37:08 -- common/autotest_common.sh@960 -- # wait 86493 00:16:22.534 19:37:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:22.534 19:37:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:22.534 19:37:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:22.534 19:37:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.534 19:37:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:22.534 19:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.534 19:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.534 19:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.534 19:37:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:22.534 00:16:22.534 real 0m6.083s 00:16:22.534 user 0m20.321s 00:16:22.534 sys 0m1.469s 00:16:22.534 19:37:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:22.534 19:37:09 -- common/autotest_common.sh@10 -- # set +x 00:16:22.534 ************************************ 00:16:22.534 END TEST nvmf_nmic 00:16:22.534 ************************************ 00:16:22.534 19:37:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:22.534 19:37:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:22.534 19:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.534 19:37:09 -- common/autotest_common.sh@10 -- # set +x 00:16:22.534 ************************************ 00:16:22.534 START TEST nvmf_fio_target 00:16:22.534 ************************************ 00:16:22.534 19:37:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:22.534 * Looking for test storage... 
00:16:22.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.534 19:37:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:22.534 19:37:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:22.534 19:37:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:22.793 19:37:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:22.793 19:37:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:22.793 19:37:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:22.793 19:37:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:22.793 19:37:09 -- scripts/common.sh@335 -- # IFS=.-: 00:16:22.793 19:37:09 -- scripts/common.sh@335 -- # read -ra ver1 00:16:22.793 19:37:09 -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.793 19:37:09 -- scripts/common.sh@336 -- # read -ra ver2 00:16:22.793 19:37:09 -- scripts/common.sh@337 -- # local 'op=<' 00:16:22.793 19:37:09 -- scripts/common.sh@339 -- # ver1_l=2 00:16:22.793 19:37:09 -- scripts/common.sh@340 -- # ver2_l=1 00:16:22.793 19:37:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:22.793 19:37:09 -- scripts/common.sh@343 -- # case "$op" in 00:16:22.793 19:37:09 -- scripts/common.sh@344 -- # : 1 00:16:22.793 19:37:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:22.793 19:37:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.793 19:37:09 -- scripts/common.sh@364 -- # decimal 1 00:16:22.793 19:37:09 -- scripts/common.sh@352 -- # local d=1 00:16:22.793 19:37:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.793 19:37:09 -- scripts/common.sh@354 -- # echo 1 00:16:22.793 19:37:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:22.793 19:37:09 -- scripts/common.sh@365 -- # decimal 2 00:16:22.793 19:37:09 -- scripts/common.sh@352 -- # local d=2 00:16:22.793 19:37:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.793 19:37:09 -- scripts/common.sh@354 -- # echo 2 00:16:22.793 19:37:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:22.793 19:37:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:22.793 19:37:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:22.793 19:37:09 -- scripts/common.sh@367 -- # return 0 00:16:22.793 19:37:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.793 19:37:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:22.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.793 --rc genhtml_branch_coverage=1 00:16:22.793 --rc genhtml_function_coverage=1 00:16:22.793 --rc genhtml_legend=1 00:16:22.793 --rc geninfo_all_blocks=1 00:16:22.793 --rc geninfo_unexecuted_blocks=1 00:16:22.793 00:16:22.793 ' 00:16:22.793 19:37:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:22.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.793 --rc genhtml_branch_coverage=1 00:16:22.793 --rc genhtml_function_coverage=1 00:16:22.793 --rc genhtml_legend=1 00:16:22.793 --rc geninfo_all_blocks=1 00:16:22.793 --rc geninfo_unexecuted_blocks=1 00:16:22.793 00:16:22.793 ' 00:16:22.793 19:37:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:22.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.794 --rc genhtml_branch_coverage=1 00:16:22.794 --rc genhtml_function_coverage=1 00:16:22.794 --rc genhtml_legend=1 00:16:22.794 --rc geninfo_all_blocks=1 00:16:22.794 --rc geninfo_unexecuted_blocks=1 00:16:22.794 00:16:22.794 ' 00:16:22.794 
19:37:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:22.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.794 --rc genhtml_branch_coverage=1 00:16:22.794 --rc genhtml_function_coverage=1 00:16:22.794 --rc genhtml_legend=1 00:16:22.794 --rc geninfo_all_blocks=1 00:16:22.794 --rc geninfo_unexecuted_blocks=1 00:16:22.794 00:16:22.794 ' 00:16:22.794 19:37:09 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.794 19:37:09 -- nvmf/common.sh@7 -- # uname -s 00:16:22.794 19:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.794 19:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.794 19:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.794 19:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.794 19:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.794 19:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.794 19:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.794 19:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.794 19:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.794 19:37:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:22.794 19:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:22.794 19:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.794 19:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.794 19:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.794 19:37:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.794 19:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.794 19:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.794 19:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.794 19:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.794 19:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.794 19:37:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.794 19:37:09 -- paths/export.sh@5 -- # export PATH 00:16:22.794 19:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.794 19:37:09 -- nvmf/common.sh@46 -- # : 0 00:16:22.794 19:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:22.794 19:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:22.794 19:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:22.794 19:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.794 19:37:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.794 19:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:22.794 19:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:22.794 19:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:22.794 19:37:09 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.794 19:37:09 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.794 19:37:09 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.794 19:37:09 -- target/fio.sh@16 -- # nvmftestinit 00:16:22.794 19:37:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:22.794 19:37:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.794 19:37:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:22.794 19:37:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:22.794 19:37:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:22.794 19:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.794 19:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.794 19:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.794 19:37:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:22.794 19:37:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:22.794 19:37:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.794 19:37:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.794 19:37:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.794 19:37:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:22.794 19:37:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.794 19:37:09 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.794 19:37:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.794 19:37:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.794 19:37:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.794 19:37:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.794 19:37:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.794 19:37:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.794 19:37:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:22.794 19:37:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:22.794 Cannot find device "nvmf_tgt_br" 00:16:22.794 19:37:09 -- nvmf/common.sh@154 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.794 Cannot find device "nvmf_tgt_br2" 00:16:22.794 19:37:09 -- nvmf/common.sh@155 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:22.794 19:37:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:22.794 Cannot find device "nvmf_tgt_br" 00:16:22.794 19:37:09 -- nvmf/common.sh@157 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:22.794 Cannot find device "nvmf_tgt_br2" 00:16:22.794 19:37:09 -- nvmf/common.sh@158 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:22.794 19:37:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:22.794 19:37:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.794 19:37:09 -- nvmf/common.sh@161 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.794 19:37:09 -- nvmf/common.sh@162 -- # true 00:16:22.794 19:37:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.794 19:37:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.794 19:37:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.794 19:37:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.794 19:37:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.053 19:37:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.053 19:37:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.053 19:37:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.053 19:37:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.053 19:37:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:23.053 19:37:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:23.053 19:37:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:23.053 19:37:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:23.053 19:37:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.053 19:37:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:23.053 19:37:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.053 19:37:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:23.053 19:37:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:23.053 19:37:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.053 19:37:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.053 19:37:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.053 19:37:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.053 19:37:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.053 19:37:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:23.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:23.053 00:16:23.053 --- 10.0.0.2 ping statistics --- 00:16:23.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.053 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:23.053 19:37:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:23.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:23.053 00:16:23.053 --- 10.0.0.3 ping statistics --- 00:16:23.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.053 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:23.053 19:37:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:23.053 00:16:23.053 --- 10.0.0.1 ping statistics --- 00:16:23.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.053 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:23.053 19:37:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.053 19:37:09 -- nvmf/common.sh@421 -- # return 0 00:16:23.053 19:37:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.053 19:37:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.053 19:37:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.053 19:37:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.053 19:37:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.053 19:37:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.053 19:37:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.053 19:37:09 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:23.053 19:37:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.053 19:37:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.053 19:37:09 -- common/autotest_common.sh@10 -- # set +x 00:16:23.053 19:37:09 -- nvmf/common.sh@469 -- # nvmfpid=86790 00:16:23.053 19:37:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.053 19:37:09 -- nvmf/common.sh@470 -- # waitforlisten 86790 00:16:23.053 19:37:09 -- common/autotest_common.sh@829 -- # '[' -z 86790 ']' 00:16:23.053 19:37:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.053 19:37:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:23.053 19:37:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.053 19:37:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.053 19:37:09 -- common/autotest_common.sh@10 -- # set +x 00:16:23.053 [2024-12-15 19:37:09.922205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:23.053 [2024-12-15 19:37:09.922295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.312 [2024-12-15 19:37:10.064100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.312 [2024-12-15 19:37:10.139403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.312 [2024-12-15 19:37:10.139618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.312 [2024-12-15 19:37:10.139636] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.312 [2024-12-15 19:37:10.139647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.312 [2024-12-15 19:37:10.139811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.312 [2024-12-15 19:37:10.140003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.312 [2024-12-15 19:37:10.140512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.312 [2024-12-15 19:37:10.140551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.247 19:37:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.247 19:37:10 -- common/autotest_common.sh@862 -- # return 0 00:16:24.247 19:37:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.247 19:37:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.247 19:37:10 -- common/autotest_common.sh@10 -- # set +x 00:16:24.247 19:37:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.247 19:37:10 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.505 [2024-12-15 19:37:11.259759] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.505 19:37:11 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.764 19:37:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:24.764 19:37:11 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.022 19:37:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:25.022 19:37:11 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.589 19:37:12 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:25.589 19:37:12 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.848 19:37:12 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:25.848 19:37:12 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:26.107 19:37:12 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.365 19:37:13 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:26.365 19:37:13 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.624 19:37:13 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:26.624 19:37:13 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.882 19:37:13 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:26.882 19:37:13 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:27.140 19:37:13 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.399 19:37:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:27.399 19:37:14 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.657 19:37:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:27.657 19:37:14 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.657 19:37:14 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.916 [2024-12-15 19:37:14.741109] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.916 19:37:14 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:28.174 19:37:14 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:28.432 19:37:15 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.690 19:37:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:28.690 19:37:15 -- common/autotest_common.sh@1187 -- # local i=0 00:16:28.690 19:37:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.690 19:37:15 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:28.690 19:37:15 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:28.690 19:37:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:30.591 19:37:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:30.591 19:37:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:30.591 19:37:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.591 19:37:17 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:30.591 19:37:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.591 19:37:17 -- common/autotest_common.sh@1197 -- # return 0 00:16:30.591 19:37:17 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:30.591 [global] 00:16:30.591 thread=1 00:16:30.591 invalidate=1 00:16:30.591 rw=write 00:16:30.591 time_based=1 00:16:30.591 runtime=1 00:16:30.591 ioengine=libaio 00:16:30.591 direct=1 00:16:30.591 bs=4096 00:16:30.591 iodepth=1 00:16:30.591 norandommap=0 00:16:30.591 numjobs=1 00:16:30.591 00:16:30.591 verify_dump=1 00:16:30.591 verify_backlog=512 
00:16:30.591 verify_state_save=0 00:16:30.591 do_verify=1 00:16:30.591 verify=crc32c-intel 00:16:30.591 [job0] 00:16:30.591 filename=/dev/nvme0n1 00:16:30.591 [job1] 00:16:30.591 filename=/dev/nvme0n2 00:16:30.591 [job2] 00:16:30.591 filename=/dev/nvme0n3 00:16:30.591 [job3] 00:16:30.591 filename=/dev/nvme0n4 00:16:30.849 Could not set queue depth (nvme0n1) 00:16:30.849 Could not set queue depth (nvme0n2) 00:16:30.849 Could not set queue depth (nvme0n3) 00:16:30.849 Could not set queue depth (nvme0n4) 00:16:30.849 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.849 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.849 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.849 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.849 fio-3.35 00:16:30.849 Starting 4 threads 00:16:32.226 00:16:32.226 job0: (groupid=0, jobs=1): err= 0: pid=87088: Sun Dec 15 19:37:18 2024 00:16:32.226 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:32.226 slat (nsec): min=11447, max=61820, avg=14794.13, stdev=3437.75 00:16:32.226 clat (usec): min=109, max=7659, avg=145.66, stdev=146.23 00:16:32.226 lat (usec): min=130, max=7672, avg=160.45, stdev=146.22 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:16:32.226 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:16:32.226 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:16:32.226 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 223], 99.95th=[ 3097], 00:16:32.226 | 99.99th=[ 7635] 00:16:32.226 write: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:32.226 slat (nsec): min=17548, max=96560, avg=22127.24, stdev=5582.47 00:16:32.226 clat (usec): min=84, max=11195, avg=116.42, stdev=186.07 00:16:32.226 lat (usec): min=109, max=11214, avg=138.54, stdev=186.10 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 105], 00:16:32.226 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:16:32.226 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 133], 00:16:32.226 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 184], 99.95th=[ 865], 00:16:32.226 | 99.99th=[11207] 00:16:32.226 bw ( KiB/s): min=15040, max=15040, per=33.78%, avg=15040.00, stdev= 0.00, samples=1 00:16:32.226 iops : min= 3760, max= 3760, avg=3760.00, stdev= 0.00, samples=1 00:16:32.226 lat (usec) : 100=3.29%, 250=96.62%, 500=0.02%, 750=0.02%, 1000=0.02% 00:16:32.226 lat (msec) : 4=0.02%, 10=0.02%, 20=0.02% 00:16:32.226 cpu : usr=2.40%, sys=9.40%, ctx=6662, majf=0, minf=11 00:16:32.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 issued rwts: total=3072,3583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.226 job1: (groupid=0, jobs=1): err= 0: pid=87089: Sun Dec 15 19:37:18 2024 00:16:32.226 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:32.226 slat (nsec): min=13269, max=74209, avg=16794.85, stdev=4974.36 00:16:32.226 clat (usec): min=120, max=327, avg=143.12, 
stdev=12.75 00:16:32.226 lat (usec): min=135, max=342, avg=159.91, stdev=13.41 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:16:32.226 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:16:32.226 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:16:32.226 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 210], 00:16:32.226 | 99.99th=[ 330] 00:16:32.226 write: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec); 0 zone resets 00:16:32.226 slat (usec): min=14, max=132, avg=25.77, stdev= 7.17 00:16:32.226 clat (usec): min=86, max=417, avg=118.09, stdev=28.48 00:16:32.226 lat (usec): min=113, max=454, avg=143.87, stdev=28.93 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:16:32.226 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 115], 00:16:32.226 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 155], 00:16:32.226 | 99.00th=[ 251], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 318], 00:16:32.226 | 99.99th=[ 420] 00:16:32.226 bw ( KiB/s): min=14600, max=14600, per=32.79%, avg=14600.00, stdev= 0.00, samples=1 00:16:32.226 iops : min= 3650, max= 3650, avg=3650.00, stdev= 0.00, samples=1 00:16:32.226 lat (usec) : 100=4.47%, 250=94.93%, 500=0.60% 00:16:32.226 cpu : usr=2.40%, sys=10.20%, ctx=6560, majf=0, minf=3 00:16:32.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 issued rwts: total=3072,3462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.226 job2: (groupid=0, jobs=1): err= 0: pid=87090: Sun Dec 15 19:37:18 2024 00:16:32.226 read: IOPS=1861, BW=7445KiB/s (7623kB/s)(7452KiB/1001msec) 00:16:32.226 slat (nsec): min=6303, max=55102, avg=13763.36, stdev=5097.81 00:16:32.226 clat (usec): min=205, max=10545, avg=270.95, stdev=241.85 00:16:32.226 lat (usec): min=223, max=10557, avg=284.71, stdev=241.86 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:16:32.226 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:16:32.226 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 367], 00:16:32.226 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 717], 99.95th=[10552], 00:16:32.226 | 99.99th=[10552] 00:16:32.226 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:32.226 slat (nsec): min=19264, max=99314, avg=24974.27, stdev=6388.78 00:16:32.226 clat (usec): min=116, max=780, avg=201.19, stdev=22.90 00:16:32.226 lat (usec): min=144, max=802, avg=226.17, stdev=22.28 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:16:32.226 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:16:32.226 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:16:32.226 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 578], 00:16:32.226 | 99.99th=[ 783] 00:16:32.226 bw ( KiB/s): min= 8192, max= 8192, per=18.40%, avg=8192.00, stdev= 0.00, samples=1 00:16:32.226 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:32.226 lat (usec) : 250=70.37%, 500=29.43%, 750=0.15%, 1000=0.03% 00:16:32.226 lat (msec) : 20=0.03% 
00:16:32.226 cpu : usr=1.70%, sys=5.60%, ctx=3941, majf=0, minf=11 00:16:32.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 issued rwts: total=1863,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.226 job3: (groupid=0, jobs=1): err= 0: pid=87091: Sun Dec 15 19:37:18 2024 00:16:32.226 read: IOPS=1861, BW=7445KiB/s (7623kB/s)(7452KiB/1001msec) 00:16:32.226 slat (usec): min=6, max=104, avg=15.91, stdev= 5.91 00:16:32.226 clat (usec): min=130, max=10728, avg=268.88, stdev=246.24 00:16:32.226 lat (usec): min=152, max=10742, avg=284.78, stdev=246.20 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 241], 00:16:32.226 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:16:32.226 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 359], 00:16:32.226 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[ 693], 99.95th=[10683], 00:16:32.226 | 99.99th=[10683] 00:16:32.226 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:32.226 slat (nsec): min=18223, max=67987, avg=24886.31, stdev=6170.10 00:16:32.226 clat (usec): min=142, max=757, avg=201.27, stdev=22.34 00:16:32.226 lat (usec): min=179, max=793, avg=226.15, stdev=21.53 00:16:32.226 clat percentiles (usec): 00:16:32.226 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:16:32.226 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:16:32.226 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:16:32.226 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 343], 99.95th=[ 469], 00:16:32.226 | 99.99th=[ 758] 00:16:32.226 bw ( KiB/s): min= 8192, max= 8192, per=18.40%, avg=8192.00, stdev= 0.00, samples=1 00:16:32.226 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:32.226 lat (usec) : 250=72.26%, 500=27.51%, 750=0.18%, 1000=0.03% 00:16:32.226 lat (msec) : 20=0.03% 00:16:32.226 cpu : usr=1.80%, sys=5.40%, ctx=3947, majf=0, minf=11 00:16:32.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.226 issued rwts: total=1863,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.227 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.227 00:16:32.227 Run status group 0 (all jobs): 00:16:32.227 READ: bw=38.5MiB/s (40.4MB/s), 7445KiB/s-12.0MiB/s (7623kB/s-12.6MB/s), io=38.6MiB (40.4MB), run=1001-1001msec 00:16:32.227 WRITE: bw=43.5MiB/s (45.6MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=43.5MiB (45.6MB), run=1001-1001msec 00:16:32.227 00:16:32.227 Disk stats (read/write): 00:16:32.227 nvme0n1: ios=2805/3072, merge=0/0, ticks=444/379, in_queue=823, util=87.88% 00:16:32.227 nvme0n2: ios=2676/3072, merge=0/0, ticks=423/387, in_queue=810, util=88.04% 00:16:32.227 nvme0n3: ios=1536/1936, merge=0/0, ticks=386/407, in_queue=793, util=89.20% 00:16:32.227 nvme0n4: ios=1536/1936, merge=0/0, ticks=398/418, in_queue=816, util=89.76% 00:16:32.227 19:37:18 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:32.227 [global] 00:16:32.227 thread=1 
00:16:32.227 invalidate=1 00:16:32.227 rw=randwrite 00:16:32.227 time_based=1 00:16:32.227 runtime=1 00:16:32.227 ioengine=libaio 00:16:32.227 direct=1 00:16:32.227 bs=4096 00:16:32.227 iodepth=1 00:16:32.227 norandommap=0 00:16:32.227 numjobs=1 00:16:32.227 00:16:32.227 verify_dump=1 00:16:32.227 verify_backlog=512 00:16:32.227 verify_state_save=0 00:16:32.227 do_verify=1 00:16:32.227 verify=crc32c-intel 00:16:32.227 [job0] 00:16:32.227 filename=/dev/nvme0n1 00:16:32.227 [job1] 00:16:32.227 filename=/dev/nvme0n2 00:16:32.227 [job2] 00:16:32.227 filename=/dev/nvme0n3 00:16:32.227 [job3] 00:16:32.227 filename=/dev/nvme0n4 00:16:32.227 Could not set queue depth (nvme0n1) 00:16:32.227 Could not set queue depth (nvme0n2) 00:16:32.227 Could not set queue depth (nvme0n3) 00:16:32.227 Could not set queue depth (nvme0n4) 00:16:32.227 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.227 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.227 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.227 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.227 fio-3.35 00:16:32.227 Starting 4 threads 00:16:33.601 00:16:33.601 job0: (groupid=0, jobs=1): err= 0: pid=87144: Sun Dec 15 19:37:20 2024 00:16:33.601 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:33.601 slat (nsec): min=11323, max=54269, avg=13492.85, stdev=2851.43 00:16:33.601 clat (usec): min=119, max=2705, avg=147.71, stdev=49.07 00:16:33.601 lat (usec): min=132, max=2718, avg=161.20, stdev=49.44 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 125], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:16:33.601 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:16:33.601 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 169], 00:16:33.601 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 457], 99.95th=[ 545], 00:16:33.601 | 99.99th=[ 2704] 00:16:33.601 write: IOPS=3359, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec); 0 zone resets 00:16:33.601 slat (nsec): min=17452, max=85239, avg=21636.84, stdev=6018.76 00:16:33.601 clat (usec): min=80, max=7886, avg=125.56, stdev=167.96 00:16:33.601 lat (usec): min=111, max=7906, avg=147.20, stdev=168.07 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:16:33.601 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:16:33.601 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:16:33.601 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 2442], 99.95th=[ 2540], 00:16:33.601 | 99.99th=[ 7898] 00:16:33.601 bw ( KiB/s): min=14416, max=14416, per=28.64%, avg=14416.00, stdev= 0.00, samples=1 00:16:33.601 iops : min= 3604, max= 3604, avg=3604.00, stdev= 0.00, samples=1 00:16:33.601 lat (usec) : 100=1.06%, 250=98.68%, 500=0.08%, 750=0.02% 00:16:33.601 lat (msec) : 2=0.05%, 4=0.11%, 10=0.02% 00:16:33.601 cpu : usr=2.20%, sys=8.60%, ctx=6435, majf=0, minf=13 00:16:33.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 issued rwts: total=3072,3363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.601 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:16:33.601 job1: (groupid=0, jobs=1): err= 0: pid=87145: Sun Dec 15 19:37:20 2024 00:16:33.601 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:33.601 slat (nsec): min=12251, max=73186, avg=16457.18, stdev=5176.81 00:16:33.601 clat (usec): min=118, max=773, avg=149.17, stdev=22.46 00:16:33.601 lat (usec): min=134, max=796, avg=165.62, stdev=22.90 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:16:33.601 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:16:33.601 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:16:33.601 | 99.00th=[ 196], 99.50th=[ 212], 99.90th=[ 416], 99.95th=[ 627], 00:16:33.601 | 99.99th=[ 775] 00:16:33.601 write: IOPS=3332, BW=13.0MiB/s (13.7MB/s)(13.0MiB/1001msec); 0 zone resets 00:16:33.601 slat (usec): min=18, max=116, avg=24.79, stdev= 6.75 00:16:33.601 clat (usec): min=91, max=217, avg=119.39, stdev=11.52 00:16:33.601 lat (usec): min=111, max=300, avg=144.17, stdev=13.36 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:16:33.601 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:16:33.601 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 139], 00:16:33.601 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 190], 99.95th=[ 204], 00:16:33.601 | 99.99th=[ 219] 00:16:33.601 bw ( KiB/s): min=13176, max=13176, per=26.18%, avg=13176.00, stdev= 0.00, samples=1 00:16:33.601 iops : min= 3294, max= 3294, avg=3294.00, stdev= 0.00, samples=1 00:16:33.601 lat (usec) : 100=1.34%, 250=98.47%, 500=0.14%, 750=0.03%, 1000=0.02% 00:16:33.601 cpu : usr=2.60%, sys=9.40%, ctx=6408, majf=0, minf=7 00:16:33.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 issued rwts: total=3072,3336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.601 job2: (groupid=0, jobs=1): err= 0: pid=87146: Sun Dec 15 19:37:20 2024 00:16:33.601 read: IOPS=2612, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:16:33.601 slat (nsec): min=9921, max=46090, avg=14690.92, stdev=3156.97 00:16:33.601 clat (usec): min=122, max=2096, avg=170.94, stdev=64.88 00:16:33.601 lat (usec): min=134, max=2114, avg=185.63, stdev=64.82 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:16:33.601 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:16:33.601 | 70.00th=[ 159], 80.00th=[ 221], 90.00th=[ 245], 95.00th=[ 258], 00:16:33.601 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 873], 99.95th=[ 1270], 00:16:33.601 | 99.99th=[ 2089] 00:16:33.601 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:33.601 slat (usec): min=10, max=102, avg=21.76, stdev= 4.87 00:16:33.601 clat (usec): min=75, max=619, avg=143.10, stdev=43.67 00:16:33.601 lat (usec): min=120, max=692, avg=164.86, stdev=42.92 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:16:33.601 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:16:33.601 | 70.00th=[ 135], 80.00th=[ 149], 90.00th=[ 221], 95.00th=[ 243], 00:16:33.601 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 
347], 99.95th=[ 523], 00:16:33.601 | 99.99th=[ 619] 00:16:33.601 bw ( KiB/s): min=13040, max=13040, per=25.91%, avg=13040.00, stdev= 0.00, samples=1 00:16:33.601 iops : min= 3260, max= 3260, avg=3260.00, stdev= 0.00, samples=1 00:16:33.601 lat (usec) : 100=0.09%, 250=94.74%, 500=5.05%, 750=0.07%, 1000=0.02% 00:16:33.601 lat (msec) : 2=0.02%, 4=0.02% 00:16:33.601 cpu : usr=1.90%, sys=8.20%, ctx=5693, majf=0, minf=14 00:16:33.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 issued rwts: total=2615,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.601 job3: (groupid=0, jobs=1): err= 0: pid=87147: Sun Dec 15 19:37:20 2024 00:16:33.601 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:33.601 slat (usec): min=9, max=104, avg=16.21, stdev= 4.90 00:16:33.601 clat (usec): min=121, max=1966, avg=173.33, stdev=57.48 00:16:33.601 lat (usec): min=142, max=1980, avg=189.55, stdev=57.39 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 00:16:33.601 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:16:33.601 | 70.00th=[ 167], 80.00th=[ 212], 90.00th=[ 247], 95.00th=[ 262], 00:16:33.601 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 449], 99.95th=[ 603], 00:16:33.601 | 99.99th=[ 1975] 00:16:33.601 write: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec); 0 zone resets 00:16:33.601 slat (nsec): min=10640, max=81431, avg=23867.76, stdev=7157.05 00:16:33.601 clat (usec): min=93, max=22564, avg=155.14, stdev=423.77 00:16:33.601 lat (usec): min=119, max=22605, avg=179.01, stdev=423.86 00:16:33.601 clat percentiles (usec): 00:16:33.601 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:16:33.601 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 135], 00:16:33.601 | 70.00th=[ 143], 80.00th=[ 186], 90.00th=[ 223], 95.00th=[ 235], 00:16:33.601 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 416], 99.95th=[ 469], 00:16:33.601 | 99.99th=[22676] 00:16:33.601 bw ( KiB/s): min=12424, max=12424, per=24.68%, avg=12424.00, stdev= 0.00, samples=1 00:16:33.601 iops : min= 3106, max= 3106, avg=3106.00, stdev= 0.00, samples=1 00:16:33.601 lat (usec) : 100=0.06%, 250=94.58%, 500=5.31%, 750=0.02% 00:16:33.601 lat (msec) : 2=0.02%, 50=0.02% 00:16:33.601 cpu : usr=1.80%, sys=8.20%, ctx=5387, majf=0, minf=13 00:16:33.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.601 issued rwts: total=2560,2826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.601 00:16:33.601 Run status group 0 (all jobs): 00:16:33.601 READ: bw=44.2MiB/s (46.3MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.2MiB (46.4MB), run=1001-1001msec 00:16:33.601 WRITE: bw=49.2MiB/s (51.5MB/s), 11.0MiB/s-13.1MiB/s (11.6MB/s-13.8MB/s), io=49.2MiB (51.6MB), run=1001-1001msec 00:16:33.601 00:16:33.601 Disk stats (read/write): 00:16:33.601 nvme0n1: ios=2610/3038, merge=0/0, ticks=428/405, in_queue=833, util=88.68% 00:16:33.601 nvme0n2: ios=2595/3016, merge=0/0, ticks=408/393, in_queue=801, util=88.26% 
00:16:33.601 nvme0n3: ios=2560/2581, merge=0/0, ticks=434/355, in_queue=789, util=89.28% 00:16:33.602 nvme0n4: ios=2300/2560, merge=0/0, ticks=383/396, in_queue=779, util=89.74% 00:16:33.602 19:37:20 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:33.602 [global] 00:16:33.602 thread=1 00:16:33.602 invalidate=1 00:16:33.602 rw=write 00:16:33.602 time_based=1 00:16:33.602 runtime=1 00:16:33.602 ioengine=libaio 00:16:33.602 direct=1 00:16:33.602 bs=4096 00:16:33.602 iodepth=128 00:16:33.602 norandommap=0 00:16:33.602 numjobs=1 00:16:33.602 00:16:33.602 verify_dump=1 00:16:33.602 verify_backlog=512 00:16:33.602 verify_state_save=0 00:16:33.602 do_verify=1 00:16:33.602 verify=crc32c-intel 00:16:33.602 [job0] 00:16:33.602 filename=/dev/nvme0n1 00:16:33.602 [job1] 00:16:33.602 filename=/dev/nvme0n2 00:16:33.602 [job2] 00:16:33.602 filename=/dev/nvme0n3 00:16:33.602 [job3] 00:16:33.602 filename=/dev/nvme0n4 00:16:33.602 Could not set queue depth (nvme0n1) 00:16:33.602 Could not set queue depth (nvme0n2) 00:16:33.602 Could not set queue depth (nvme0n3) 00:16:33.602 Could not set queue depth (nvme0n4) 00:16:33.602 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.602 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.602 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.602 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.602 fio-3.35 00:16:33.602 Starting 4 threads 00:16:34.978 00:16:34.978 job0: (groupid=0, jobs=1): err= 0: pid=87202: Sun Dec 15 19:37:21 2024 00:16:34.978 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:16:34.978 slat (usec): min=7, max=2267, avg=74.03, stdev=305.11 00:16:34.978 clat (usec): min=7629, max=12523, avg=9876.71, stdev=760.59 00:16:34.978 lat (usec): min=7826, max=13950, avg=9950.74, stdev=715.74 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 7963], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 9372], 00:16:34.978 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:16:34.978 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:16:34.978 | 99.00th=[11469], 99.50th=[11600], 99.90th=[12125], 99.95th=[12518], 00:16:34.978 | 99.99th=[12518] 00:16:34.978 write: IOPS=6592, BW=25.8MiB/s (27.0MB/s)(25.8MiB/1002msec); 0 zone resets 00:16:34.978 slat (usec): min=10, max=2379, avg=75.98, stdev=307.65 00:16:34.978 clat (usec): min=200, max=12164, avg=9979.93, stdev=1143.36 00:16:34.978 lat (usec): min=2073, max=12188, avg=10055.91, stdev=1137.52 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 5669], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:16:34.978 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10421], 00:16:34.978 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:16:34.978 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12125], 99.95th=[12125], 00:16:34.978 | 99.99th=[12125] 00:16:34.978 bw ( KiB/s): min=25482, max=25482, per=33.81%, avg=25482.00, stdev= 0.00, samples=1 00:16:34.978 iops : min= 6370, max= 6370, avg=6370.00, stdev= 0.00, samples=1 00:16:34.978 lat (usec) : 250=0.01% 00:16:34.978 lat (msec) : 4=0.28%, 10=47.65%, 20=52.06% 00:16:34.978 cpu : usr=5.89%, sys=15.78%, ctx=952, majf=0, minf=9 00:16:34.978 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:34.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.978 issued rwts: total=6144,6606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.978 job1: (groupid=0, jobs=1): err= 0: pid=87203: Sun Dec 15 19:37:21 2024 00:16:34.978 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:34.978 slat (usec): min=3, max=5414, avg=153.94, stdev=621.77 00:16:34.978 clat (usec): min=13289, max=25889, avg=19107.46, stdev=2024.89 00:16:34.978 lat (usec): min=13306, max=26624, avg=19261.40, stdev=1986.53 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[14615], 5.00th=[15795], 10.00th=[16450], 20.00th=[17171], 00:16:34.978 | 30.00th=[17957], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:16:34.978 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[22414], 00:16:34.978 | 99.00th=[24511], 99.50th=[25297], 99.90th=[25560], 99.95th=[25822], 00:16:34.978 | 99.99th=[25822] 00:16:34.978 write: IOPS=3240, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:16:34.978 slat (usec): min=4, max=4965, avg=155.21, stdev=458.17 00:16:34.978 clat (usec): min=610, max=27369, avg=20825.31, stdev=2807.80 00:16:34.978 lat (usec): min=648, max=27444, avg=20980.53, stdev=2788.99 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 5014], 5.00th=[16319], 10.00th=[17957], 20.00th=[20317], 00:16:34.978 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365], 00:16:34.978 | 70.00th=[21627], 80.00th=[21890], 90.00th=[23462], 95.00th=[24773], 00:16:34.978 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26870], 99.95th=[27395], 00:16:34.978 | 99.99th=[27395] 00:16:34.978 bw ( KiB/s): min=12263, max=12263, per=16.27%, avg=12263.00, stdev= 0.00, samples=1 00:16:34.978 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:16:34.978 lat (usec) : 750=0.03% 00:16:34.978 lat (msec) : 10=0.62%, 20=41.69%, 50=57.66% 00:16:34.978 cpu : usr=3.20%, sys=7.70%, ctx=1260, majf=0, minf=7 00:16:34.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:34.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.978 issued rwts: total=3072,3244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.978 job2: (groupid=0, jobs=1): err= 0: pid=87204: Sun Dec 15 19:37:21 2024 00:16:34.978 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:16:34.978 slat (usec): min=2, max=6113, avg=152.69, stdev=616.23 00:16:34.978 clat (usec): min=14841, max=24794, avg=19241.84, stdev=1749.95 00:16:34.978 lat (usec): min=15117, max=24812, avg=19394.53, stdev=1707.87 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[15664], 5.00th=[16319], 10.00th=[16909], 20.00th=[17695], 00:16:34.978 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:16:34.978 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[22414], 00:16:34.978 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24249], 99.95th=[24249], 00:16:34.978 | 99.99th=[24773] 00:16:34.978 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1002msec); 0 zone resets 00:16:34.978 slat (usec): min=5, max=4997, avg=157.26, stdev=465.87 00:16:34.978 clat (usec): 
min=195, max=25700, avg=20684.58, stdev=2845.22 00:16:34.978 lat (usec): min=2194, max=25719, avg=20841.84, stdev=2824.58 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 3130], 5.00th=[16909], 10.00th=[18744], 20.00th=[20317], 00:16:34.978 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365], 00:16:34.978 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[23200], 00:16:34.978 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25560], 00:16:34.978 | 99.99th=[25822] 00:16:34.978 bw ( KiB/s): min=12312, max=12576, per=16.51%, avg=12444.00, stdev=186.68, samples=2 00:16:34.978 iops : min= 3078, max= 3144, avg=3111.00, stdev=46.67, samples=2 00:16:34.978 lat (usec) : 250=0.02% 00:16:34.978 lat (msec) : 4=0.57%, 10=0.51%, 20=40.58%, 50=58.33% 00:16:34.978 cpu : usr=2.00%, sys=9.29%, ctx=1234, majf=0, minf=14 00:16:34.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:34.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.978 issued rwts: total=3072,3239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.978 job3: (groupid=0, jobs=1): err= 0: pid=87205: Sun Dec 15 19:37:21 2024 00:16:34.978 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:16:34.978 slat (usec): min=7, max=4717, avg=84.41, stdev=398.15 00:16:34.978 clat (usec): min=7792, max=16041, avg=11050.90, stdev=1255.76 00:16:34.978 lat (usec): min=7882, max=16065, avg=11135.32, stdev=1242.03 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10159], 00:16:34.978 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:16:34.978 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:16:34.978 | 99.00th=[15270], 99.50th=[15533], 99.90th=[15664], 99.95th=[15664], 00:16:34.978 | 99.99th=[16057] 00:16:34.978 write: IOPS=5805, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1004msec); 0 zone resets 00:16:34.978 slat (usec): min=11, max=3414, avg=83.04, stdev=353.53 00:16:34.978 clat (usec): min=3157, max=14307, avg=11083.89, stdev=1249.11 00:16:34.978 lat (usec): min=3173, max=14350, avg=11166.94, stdev=1217.48 00:16:34.978 clat percentiles (usec): 00:16:34.978 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10421], 00:16:34.978 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:16:34.978 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12125], 95.00th=[12387], 00:16:34.978 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14222], 99.95th=[14353], 00:16:34.978 | 99.99th=[14353] 00:16:34.978 bw ( KiB/s): min=21040, max=24526, per=30.23%, avg=22783.00, stdev=2464.97, samples=2 00:16:34.978 iops : min= 5260, max= 6131, avg=5695.50, stdev=615.89, samples=2 00:16:34.978 lat (msec) : 4=0.10%, 10=18.05%, 20=81.84% 00:16:34.978 cpu : usr=4.99%, sys=15.05%, ctx=753, majf=0, minf=6 00:16:34.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:34.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.978 issued rwts: total=5632,5829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.979 00:16:34.979 Run status group 0 (all jobs): 00:16:34.979 READ: bw=69.7MiB/s 
(73.1MB/s), 12.0MiB/s-24.0MiB/s (12.6MB/s-25.1MB/s), io=70.0MiB (73.4MB), run=1001-1004msec 00:16:34.979 WRITE: bw=73.6MiB/s (77.2MB/s), 12.6MiB/s-25.8MiB/s (13.2MB/s-27.0MB/s), io=73.9MiB (77.5MB), run=1001-1004msec 00:16:34.979 00:16:34.979 Disk stats (read/write): 00:16:34.979 nvme0n1: ios=5433/5632, merge=0/0, ticks=12298/11477, in_queue=23775, util=88.47% 00:16:34.979 nvme0n2: ios=2604/2894, merge=0/0, ticks=11603/14188, in_queue=25791, util=88.48% 00:16:34.979 nvme0n3: ios=2560/2865, merge=0/0, ticks=11410/14287, in_queue=25697, util=88.81% 00:16:34.979 nvme0n4: ios=4790/5120, merge=0/0, ticks=15940/16299, in_queue=32239, util=89.75% 00:16:34.979 19:37:21 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:34.979 [global] 00:16:34.979 thread=1 00:16:34.979 invalidate=1 00:16:34.979 rw=randwrite 00:16:34.979 time_based=1 00:16:34.979 runtime=1 00:16:34.979 ioengine=libaio 00:16:34.979 direct=1 00:16:34.979 bs=4096 00:16:34.979 iodepth=128 00:16:34.979 norandommap=0 00:16:34.979 numjobs=1 00:16:34.979 00:16:34.979 verify_dump=1 00:16:34.979 verify_backlog=512 00:16:34.979 verify_state_save=0 00:16:34.979 do_verify=1 00:16:34.979 verify=crc32c-intel 00:16:34.979 [job0] 00:16:34.979 filename=/dev/nvme0n1 00:16:34.979 [job1] 00:16:34.979 filename=/dev/nvme0n2 00:16:34.979 [job2] 00:16:34.979 filename=/dev/nvme0n3 00:16:34.979 [job3] 00:16:34.979 filename=/dev/nvme0n4 00:16:34.979 Could not set queue depth (nvme0n1) 00:16:34.979 Could not set queue depth (nvme0n2) 00:16:34.979 Could not set queue depth (nvme0n3) 00:16:34.979 Could not set queue depth (nvme0n4) 00:16:34.979 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.979 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.979 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.979 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.979 fio-3.35 00:16:34.979 Starting 4 threads 00:16:36.355 00:16:36.355 job0: (groupid=0, jobs=1): err= 0: pid=87264: Sun Dec 15 19:37:22 2024 00:16:36.355 read: IOPS=4514, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:16:36.355 slat (usec): min=7, max=8182, avg=100.47, stdev=521.18 00:16:36.355 clat (usec): min=760, max=23336, avg=12819.28, stdev=3216.16 00:16:36.355 lat (usec): min=2953, max=24583, avg=12919.75, stdev=3250.62 00:16:36.355 clat percentiles (usec): 00:16:36.355 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9503], 00:16:36.355 | 30.00th=[10290], 40.00th=[11469], 50.00th=[13304], 60.00th=[13960], 00:16:36.355 | 70.00th=[14615], 80.00th=[15270], 90.00th=[17171], 95.00th=[17695], 00:16:36.355 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22938], 99.95th=[23200], 00:16:36.355 | 99.99th=[23462] 00:16:36.355 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:36.355 slat (usec): min=10, max=6156, avg=110.99, stdev=518.65 00:16:36.355 clat (usec): min=6077, max=32074, avg=14927.07, stdev=5977.30 00:16:36.355 lat (usec): min=6101, max=32091, avg=15038.06, stdev=6019.86 00:16:36.355 clat percentiles (usec): 00:16:36.355 | 1.00th=[ 6587], 5.00th=[ 7963], 10.00th=[ 9765], 20.00th=[10421], 00:16:36.355 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12256], 60.00th=[14746], 00:16:36.355 | 70.00th=[17433], 80.00th=[21103], 
90.00th=[23725], 95.00th=[26870], 00:16:36.355 | 99.00th=[31851], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:16:36.355 | 99.99th=[32113] 00:16:36.355 bw ( KiB/s): min=16368, max=20537, per=24.40%, avg=18452.50, stdev=2947.93, samples=2 00:16:36.355 iops : min= 4092, max= 5134, avg=4613.00, stdev=736.81, samples=2 00:16:36.355 lat (usec) : 1000=0.01% 00:16:36.355 lat (msec) : 4=0.24%, 10=19.43%, 20=66.47%, 50=13.85% 00:16:36.355 cpu : usr=3.10%, sys=13.79%, ctx=467, majf=0, minf=1 00:16:36.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:36.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.355 issued rwts: total=4524,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.355 job1: (groupid=0, jobs=1): err= 0: pid=87265: Sun Dec 15 19:37:22 2024 00:16:36.355 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:16:36.355 slat (usec): min=7, max=10599, avg=83.29, stdev=499.13 00:16:36.355 clat (usec): min=6171, max=27475, avg=10897.29, stdev=3111.61 00:16:36.355 lat (usec): min=6186, max=27520, avg=10980.58, stdev=3153.20 00:16:36.355 clat percentiles (usec): 00:16:36.355 | 1.00th=[ 6652], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9241], 00:16:36.355 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159], 00:16:36.355 | 70.00th=[10552], 80.00th=[11469], 90.00th=[14222], 95.00th=[18482], 00:16:36.355 | 99.00th=[23200], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:16:36.355 | 99.99th=[27395] 00:16:36.355 write: IOPS=5977, BW=23.3MiB/s (24.5MB/s)(23.4MiB/1002msec); 0 zone resets 00:16:36.355 slat (usec): min=10, max=8799, avg=81.80, stdev=469.82 00:16:36.355 clat (usec): min=301, max=29442, avg=10912.15, stdev=3162.92 00:16:36.355 lat (usec): min=3539, max=29474, avg=10993.96, stdev=3182.78 00:16:36.355 clat percentiles (usec): 00:16:36.355 | 1.00th=[ 5080], 5.00th=[ 6980], 10.00th=[ 8979], 20.00th=[ 9634], 00:16:36.355 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:16:36.355 | 70.00th=[10421], 80.00th=[10945], 90.00th=[13960], 95.00th=[20055], 00:16:36.355 | 99.00th=[21103], 99.50th=[22414], 99.90th=[27657], 99.95th=[28181], 00:16:36.355 | 99.99th=[29492] 00:16:36.355 bw ( KiB/s): min=20480, max=26416, per=31.00%, avg=23448.00, stdev=4197.39, samples=2 00:16:36.355 iops : min= 5120, max= 6604, avg=5862.00, stdev=1049.35, samples=2 00:16:36.355 lat (usec) : 500=0.01% 00:16:36.355 lat (msec) : 4=0.17%, 10=45.81%, 20=50.17%, 50=3.85% 00:16:36.355 cpu : usr=5.79%, sys=13.79%, ctx=460, majf=0, minf=4 00:16:36.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:36.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.355 issued rwts: total=5632,5989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.355 job2: (groupid=0, jobs=1): err= 0: pid=87266: Sun Dec 15 19:37:22 2024 00:16:36.355 read: IOPS=2342, BW=9372KiB/s (9597kB/s)(9400KiB/1003msec) 00:16:36.355 slat (usec): min=5, max=9525, avg=207.40, stdev=935.76 00:16:36.355 clat (usec): min=296, max=48559, avg=27409.60, stdev=9097.01 00:16:36.355 lat (usec): min=5983, max=48575, avg=27617.00, stdev=9118.39 00:16:36.355 clat percentiles (usec): 00:16:36.355 
| 1.00th=[11600], 5.00th=[17695], 10.00th=[19792], 20.00th=[20579], 00:16:36.355 | 30.00th=[21103], 40.00th=[21365], 50.00th=[23200], 60.00th=[26608], 00:16:36.355 | 70.00th=[32113], 80.00th=[36963], 90.00th=[43254], 95.00th=[45876], 00:16:36.355 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:16:36.355 | 99.99th=[48497] 00:16:36.355 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:16:36.355 slat (usec): min=10, max=9056, avg=192.76, stdev=957.95 00:16:36.355 clat (usec): min=8517, max=48033, avg=24226.60, stdev=7221.55 00:16:36.355 lat (usec): min=8542, max=48050, avg=24419.36, stdev=7247.82 00:16:36.355 clat percentiles (usec): 00:16:36.355 | 1.00th=[11994], 5.00th=[15664], 10.00th=[15926], 20.00th=[18220], 00:16:36.355 | 30.00th=[19792], 40.00th=[20317], 50.00th=[22938], 60.00th=[26870], 00:16:36.356 | 70.00th=[28181], 80.00th=[29754], 90.00th=[33817], 95.00th=[39060], 00:16:36.356 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:16:36.356 | 99.99th=[47973] 00:16:36.356 bw ( KiB/s): min= 9360, max=11142, per=13.55%, avg=10251.00, stdev=1260.06, samples=2 00:16:36.356 iops : min= 2340, max= 2785, avg=2562.50, stdev=314.66, samples=2 00:16:36.356 lat (usec) : 500=0.02% 00:16:36.356 lat (msec) : 10=0.75%, 20=22.99%, 50=76.23% 00:16:36.356 cpu : usr=2.69%, sys=7.09%, ctx=219, majf=0, minf=11 00:16:36.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:36.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.356 issued rwts: total=2350,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.356 job3: (groupid=0, jobs=1): err= 0: pid=87267: Sun Dec 15 19:37:22 2024 00:16:36.356 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:16:36.356 slat (usec): min=5, max=10167, avg=89.85, stdev=626.31 00:16:36.356 clat (usec): min=3557, max=21708, avg=11730.07, stdev=2772.95 00:16:36.356 lat (usec): min=3568, max=21724, avg=11819.92, stdev=2810.46 00:16:36.356 clat percentiles (usec): 00:16:36.356 | 1.00th=[ 5211], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10028], 00:16:36.356 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:16:36.356 | 70.00th=[12125], 80.00th=[13435], 90.00th=[15533], 95.00th=[17957], 00:16:36.356 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21627], 00:16:36.356 | 99.99th=[21627] 00:16:36.356 write: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1006msec); 0 zone resets 00:16:36.356 slat (usec): min=4, max=8588, avg=77.64, stdev=433.97 00:16:36.356 clat (usec): min=838, max=21661, avg=10492.75, stdev=2234.13 00:16:36.356 lat (usec): min=2271, max=21681, avg=10570.39, stdev=2275.84 00:16:36.356 clat percentiles (usec): 00:16:36.356 | 1.00th=[ 3818], 5.00th=[ 5407], 10.00th=[ 6783], 20.00th=[ 9241], 00:16:36.356 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:16:36.356 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12256], 00:16:36.356 | 99.00th=[12649], 99.50th=[17433], 99.90th=[21103], 99.95th=[21103], 00:16:36.356 | 99.99th=[21627] 00:16:36.356 bw ( KiB/s): min=21712, max=24176, per=30.34%, avg=22944.00, stdev=1742.31, samples=2 00:16:36.356 iops : min= 5428, max= 6044, avg=5736.00, stdev=435.58, samples=2 00:16:36.356 lat (usec) : 1000=0.01% 00:16:36.356 lat (msec) : 4=0.87%, 10=21.08%, 20=77.04%, 50=1.00% 
00:16:36.356 cpu : usr=5.87%, sys=13.23%, ctx=671, majf=0, minf=2 00:16:36.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:36.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.356 issued rwts: total=5632,5864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.356 00:16:36.356 Run status group 0 (all jobs): 00:16:36.356 READ: bw=70.4MiB/s (73.8MB/s), 9372KiB/s-22.0MiB/s (9597kB/s-23.0MB/s), io=70.9MiB (74.3MB), run=1002-1006msec 00:16:36.356 WRITE: bw=73.9MiB/s (77.4MB/s), 9.97MiB/s-23.3MiB/s (10.5MB/s-24.5MB/s), io=74.3MiB (77.9MB), run=1002-1006msec 00:16:36.356 00:16:36.356 Disk stats (read/write): 00:16:36.356 nvme0n1: ios=3736/4096, merge=0/0, ticks=22022/28018, in_queue=50040, util=88.49% 00:16:36.356 nvme0n2: ios=4714/5120, merge=0/0, ticks=23707/24064, in_queue=47771, util=89.29% 00:16:36.356 nvme0n3: ios=2048/2381, merge=0/0, ticks=15094/18668, in_queue=33762, util=89.08% 00:16:36.356 nvme0n4: ios=4723/5120, merge=0/0, ticks=51190/51357, in_queue=102547, util=89.75% 00:16:36.356 19:37:22 -- target/fio.sh@55 -- # sync 00:16:36.356 19:37:22 -- target/fio.sh@59 -- # fio_pid=87280 00:16:36.356 19:37:22 -- target/fio.sh@61 -- # sleep 3 00:16:36.356 19:37:22 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:36.356 [global] 00:16:36.356 thread=1 00:16:36.356 invalidate=1 00:16:36.356 rw=read 00:16:36.356 time_based=1 00:16:36.356 runtime=10 00:16:36.356 ioengine=libaio 00:16:36.356 direct=1 00:16:36.356 bs=4096 00:16:36.356 iodepth=1 00:16:36.356 norandommap=1 00:16:36.356 numjobs=1 00:16:36.356 00:16:36.356 [job0] 00:16:36.356 filename=/dev/nvme0n1 00:16:36.356 [job1] 00:16:36.356 filename=/dev/nvme0n2 00:16:36.356 [job2] 00:16:36.356 filename=/dev/nvme0n3 00:16:36.356 [job3] 00:16:36.356 filename=/dev/nvme0n4 00:16:36.356 Could not set queue depth (nvme0n1) 00:16:36.356 Could not set queue depth (nvme0n2) 00:16:36.356 Could not set queue depth (nvme0n3) 00:16:36.356 Could not set queue depth (nvme0n4) 00:16:36.356 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.356 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.356 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.356 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.356 fio-3.35 00:16:36.356 Starting 4 threads 00:16:39.637 19:37:25 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:39.637 fio: pid=87327, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:39.637 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46166016, buflen=4096 00:16:39.637 19:37:26 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:39.637 fio: pid=87326, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:39.637 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=50147328, buflen=4096 00:16:39.637 19:37:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.637 19:37:26 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:39.895 fio: pid=87320, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:39.895 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=64516096, buflen=4096 00:16:40.153 19:37:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.153 19:37:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:40.413 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8429568, buflen=4096 00:16:40.413 fio: pid=87321, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:40.413 00:16:40.413 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87320: Sun Dec 15 19:37:27 2024 00:16:40.413 read: IOPS=4518, BW=17.6MiB/s (18.5MB/s)(61.5MiB/3486msec) 00:16:40.413 slat (usec): min=7, max=13408, avg=16.54, stdev=144.22 00:16:40.413 clat (usec): min=122, max=3653, avg=203.45, stdev=74.85 00:16:40.413 lat (usec): min=135, max=13673, avg=220.00, stdev=162.75 00:16:40.413 clat percentiles (usec): 00:16:40.413 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:16:40.413 | 30.00th=[ 159], 40.00th=[ 176], 50.00th=[ 215], 60.00th=[ 227], 00:16:40.413 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:16:40.413 | 99.00th=[ 293], 99.50th=[ 355], 99.90th=[ 873], 99.95th=[ 1401], 00:16:40.413 | 99.99th=[ 3064] 00:16:40.413 bw ( KiB/s): min=15346, max=23976, per=30.15%, avg=18472.33, stdev=4026.15, samples=6 00:16:40.413 iops : min= 3836, max= 5994, avg=4618.00, stdev=1006.61, samples=6 00:16:40.413 lat (usec) : 250=86.66%, 500=13.06%, 750=0.13%, 1000=0.05% 00:16:40.413 lat (msec) : 2=0.06%, 4=0.03% 00:16:40.413 cpu : usr=1.09%, sys=5.51%, ctx=15783, majf=0, minf=1 00:16:40.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.413 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.413 issued rwts: total=15752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.413 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87321: Sun Dec 15 19:37:27 2024 00:16:40.413 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(72.0MiB/3767msec) 00:16:40.413 slat (usec): min=9, max=12839, avg=18.04, stdev=187.64 00:16:40.413 clat (usec): min=106, max=4037, avg=184.97, stdev=85.29 00:16:40.413 lat (usec): min=125, max=15455, avg=203.00, stdev=214.55 00:16:40.413 clat percentiles (usec): 00:16:40.414 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 137], 00:16:40.414 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 159], 60.00th=[ 215], 00:16:40.414 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:16:40.414 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 635], 99.95th=[ 865], 00:16:40.414 | 99.99th=[ 3916] 00:16:40.414 bw ( KiB/s): min=15648, max=25240, per=31.44%, avg=19268.57, stdev=4175.42, samples=7 00:16:40.414 iops : min= 3912, max= 6310, avg=4817.14, stdev=1043.86, samples=7 00:16:40.414 lat (usec) : 250=92.07%, 500=7.78%, 750=0.08%, 1000=0.02% 00:16:40.414 lat (msec) : 2=0.01%, 4=0.04%, 10=0.01% 00:16:40.414 cpu : usr=1.43%, sys=5.87%, ctx=18476, majf=0, minf=2 00:16:40.414 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 issued rwts: total=18443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.414 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87326: Sun Dec 15 19:37:27 2024 00:16:40.414 read: IOPS=3820, BW=14.9MiB/s (15.6MB/s)(47.8MiB/3205msec) 00:16:40.414 slat (usec): min=7, max=8748, avg=15.25, stdev=103.22 00:16:40.414 clat (usec): min=140, max=5931, avg=245.31, stdev=71.51 00:16:40.414 lat (usec): min=153, max=8898, avg=260.56, stdev=125.04 00:16:40.414 clat percentiles (usec): 00:16:40.414 | 1.00th=[ 196], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 227], 00:16:40.414 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:16:40.414 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:16:40.414 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 865], 99.95th=[ 1205], 00:16:40.414 | 99.99th=[ 2769] 00:16:40.414 bw ( KiB/s): min=14976, max=15952, per=25.03%, avg=15337.67, stdev=365.30, samples=6 00:16:40.414 iops : min= 3744, max= 3988, avg=3834.33, stdev=91.29, samples=6 00:16:40.414 lat (usec) : 250=67.04%, 500=32.69%, 750=0.11%, 1000=0.07% 00:16:40.414 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:16:40.414 cpu : usr=1.09%, sys=4.34%, ctx=12260, majf=0, minf=1 00:16:40.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 issued rwts: total=12244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.414 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87327: Sun Dec 15 19:37:27 2024 00:16:40.414 read: IOPS=3853, BW=15.1MiB/s (15.8MB/s)(44.0MiB/2925msec) 00:16:40.414 slat (nsec): min=9493, max=80785, avg=14512.80, stdev=4925.64 00:16:40.414 clat (usec): min=174, max=3950, avg=243.63, stdev=46.03 00:16:40.414 lat (usec): min=190, max=3964, avg=258.14, stdev=45.92 00:16:40.414 clat percentiles (usec): 00:16:40.414 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:16:40.414 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:16:40.414 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:16:40.414 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 537], 00:16:40.414 | 99.99th=[ 2606] 00:16:40.414 bw ( KiB/s): min=14976, max=15936, per=25.10%, avg=15379.20, stdev=417.14, samples=5 00:16:40.414 iops : min= 3744, max= 3984, avg=3844.80, stdev=104.28, samples=5 00:16:40.414 lat (usec) : 250=69.85%, 500=30.09%, 750=0.03%, 1000=0.01% 00:16:40.414 lat (msec) : 4=0.02% 00:16:40.414 cpu : usr=1.16%, sys=4.55%, ctx=11286, majf=0, minf=2 00:16:40.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.414 issued rwts: total=11272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.414 
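The read job above is the hotplug case: fio is left running in the background while fio.sh deletes the RAID and malloc bdevs out from under it, so the io_u "Operation not supported" errors earlier in this run are the expected outcome, not a test failure. The pattern, sketched with the names seen in this log (the helper logic inside fio.sh may differ in detail):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  "$RPC" bdev_raid_delete concat0          # pull namespaces away while I/O is in flight
  "$RPC" bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$m"
  done
  wait "$fio_pid" || fio_status=4          # a non-zero fio exit is what the hotplug test wants to see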
00:16:40.414 Run status group 0 (all jobs): 00:16:40.414 READ: bw=59.8MiB/s (62.7MB/s), 14.9MiB/s-19.1MiB/s (15.6MB/s-20.1MB/s), io=225MiB (236MB), run=2925-3767msec 00:16:40.414 00:16:40.414 Disk stats (read/write): 00:16:40.414 nvme0n1: ios=15240/0, merge=0/0, ticks=3130/0, in_queue=3130, util=95.19% 00:16:40.414 nvme0n2: ios=17410/0, merge=0/0, ticks=3291/0, in_queue=3291, util=95.02% 00:16:40.414 nvme0n3: ios=11893/0, merge=0/0, ticks=2906/0, in_queue=2906, util=96.30% 00:16:40.414 nvme0n4: ios=11047/0, merge=0/0, ticks=2715/0, in_queue=2715, util=96.76% 00:16:40.414 19:37:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.414 19:37:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:40.685 19:37:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.685 19:37:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:40.963 19:37:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.963 19:37:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:41.221 19:37:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.221 19:37:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:41.480 19:37:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.480 19:37:28 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:41.738 19:37:28 -- target/fio.sh@69 -- # fio_status=0 00:16:41.738 19:37:28 -- target/fio.sh@70 -- # wait 87280 00:16:41.738 19:37:28 -- target/fio.sh@70 -- # fio_status=4 00:16:41.738 19:37:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.738 19:37:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.738 19:37:28 -- common/autotest_common.sh@1208 -- # local i=0 00:16:41.738 19:37:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:41.738 19:37:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.738 19:37:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:41.738 19:37:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.738 nvmf hotplug test: fio failed as expected 00:16:41.738 19:37:28 -- common/autotest_common.sh@1220 -- # return 0 00:16:41.738 19:37:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:41.738 19:37:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:41.738 19:37:28 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.996 19:37:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:41.996 19:37:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:41.996 19:37:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:41.996 19:37:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:41.996 19:37:28 -- target/fio.sh@91 -- # nvmftestfini 00:16:41.996 19:37:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:41.996 19:37:28 -- nvmf/common.sh@116 -- # sync 00:16:41.996 19:37:28 
-- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:41.996 19:37:28 -- nvmf/common.sh@119 -- # set +e 00:16:41.996 19:37:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:41.996 19:37:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:41.996 rmmod nvme_tcp 00:16:41.996 rmmod nvme_fabrics 00:16:41.996 rmmod nvme_keyring 00:16:42.254 19:37:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.254 19:37:28 -- nvmf/common.sh@123 -- # set -e 00:16:42.254 19:37:28 -- nvmf/common.sh@124 -- # return 0 00:16:42.254 19:37:28 -- nvmf/common.sh@477 -- # '[' -n 86790 ']' 00:16:42.254 19:37:28 -- nvmf/common.sh@478 -- # killprocess 86790 00:16:42.254 19:37:28 -- common/autotest_common.sh@936 -- # '[' -z 86790 ']' 00:16:42.254 19:37:28 -- common/autotest_common.sh@940 -- # kill -0 86790 00:16:42.254 19:37:28 -- common/autotest_common.sh@941 -- # uname 00:16:42.254 19:37:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.254 19:37:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86790 00:16:42.254 killing process with pid 86790 00:16:42.254 19:37:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.254 19:37:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.254 19:37:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86790' 00:16:42.254 19:37:28 -- common/autotest_common.sh@955 -- # kill 86790 00:16:42.254 19:37:28 -- common/autotest_common.sh@960 -- # wait 86790 00:16:42.512 19:37:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.512 19:37:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.512 19:37:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.512 19:37:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.512 19:37:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.512 19:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.512 19:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.512 19:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.512 19:37:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:42.512 ************************************ 00:16:42.512 END TEST nvmf_fio_target 00:16:42.512 ************************************ 00:16:42.512 00:16:42.512 real 0m19.898s 00:16:42.512 user 1m15.865s 00:16:42.512 sys 0m9.274s 00:16:42.512 19:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.512 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.512 19:37:29 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:42.512 19:37:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.512 19:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.512 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.512 ************************************ 00:16:42.512 START TEST nvmf_bdevio 00:16:42.512 ************************************ 00:16:42.512 19:37:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:42.512 * Looking for test storage... 
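Before the nvmf_fio_target suite ends above, its teardown follows a fixed pattern: delete each malloc bdev over RPC, disconnect the kernel initiator from the subsystem by NQN, then poll lsblk until no block device carrying the SPDK serial is left before removing the subsystem and killing the target. A minimal sketch of that disconnect-and-wait step, using the same NQN and serial as this run (the retry count and sleep are illustrative, not the exact waitforserial_disconnect implementation):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Poll until no namespace with the SPDK serial is still exposed to the host (illustrative loop).
    for i in $(seq 1 20); do
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
        sleep 1
    done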
00:16:42.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.512 19:37:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:42.512 19:37:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:42.512 19:37:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:42.771 19:37:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:42.771 19:37:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:42.771 19:37:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:42.771 19:37:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:42.771 19:37:29 -- scripts/common.sh@335 -- # IFS=.-: 00:16:42.771 19:37:29 -- scripts/common.sh@335 -- # read -ra ver1 00:16:42.771 19:37:29 -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.771 19:37:29 -- scripts/common.sh@336 -- # read -ra ver2 00:16:42.771 19:37:29 -- scripts/common.sh@337 -- # local 'op=<' 00:16:42.771 19:37:29 -- scripts/common.sh@339 -- # ver1_l=2 00:16:42.771 19:37:29 -- scripts/common.sh@340 -- # ver2_l=1 00:16:42.771 19:37:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:42.771 19:37:29 -- scripts/common.sh@343 -- # case "$op" in 00:16:42.771 19:37:29 -- scripts/common.sh@344 -- # : 1 00:16:42.771 19:37:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:42.771 19:37:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:42.771 19:37:29 -- scripts/common.sh@364 -- # decimal 1 00:16:42.771 19:37:29 -- scripts/common.sh@352 -- # local d=1 00:16:42.771 19:37:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.771 19:37:29 -- scripts/common.sh@354 -- # echo 1 00:16:42.771 19:37:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:42.771 19:37:29 -- scripts/common.sh@365 -- # decimal 2 00:16:42.771 19:37:29 -- scripts/common.sh@352 -- # local d=2 00:16:42.771 19:37:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.771 19:37:29 -- scripts/common.sh@354 -- # echo 2 00:16:42.771 19:37:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:42.771 19:37:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:42.771 19:37:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:42.771 19:37:29 -- scripts/common.sh@367 -- # return 0 00:16:42.771 19:37:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.771 19:37:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:42.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.771 --rc genhtml_branch_coverage=1 00:16:42.771 --rc genhtml_function_coverage=1 00:16:42.771 --rc genhtml_legend=1 00:16:42.771 --rc geninfo_all_blocks=1 00:16:42.771 --rc geninfo_unexecuted_blocks=1 00:16:42.771 00:16:42.771 ' 00:16:42.771 19:37:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:42.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.771 --rc genhtml_branch_coverage=1 00:16:42.771 --rc genhtml_function_coverage=1 00:16:42.771 --rc genhtml_legend=1 00:16:42.771 --rc geninfo_all_blocks=1 00:16:42.771 --rc geninfo_unexecuted_blocks=1 00:16:42.771 00:16:42.771 ' 00:16:42.771 19:37:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:42.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.771 --rc genhtml_branch_coverage=1 00:16:42.771 --rc genhtml_function_coverage=1 00:16:42.771 --rc genhtml_legend=1 00:16:42.771 --rc geninfo_all_blocks=1 00:16:42.771 --rc geninfo_unexecuted_blocks=1 00:16:42.771 00:16:42.771 ' 00:16:42.771 
19:37:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:42.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.771 --rc genhtml_branch_coverage=1 00:16:42.771 --rc genhtml_function_coverage=1 00:16:42.771 --rc genhtml_legend=1 00:16:42.771 --rc geninfo_all_blocks=1 00:16:42.771 --rc geninfo_unexecuted_blocks=1 00:16:42.771 00:16:42.771 ' 00:16:42.771 19:37:29 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.771 19:37:29 -- nvmf/common.sh@7 -- # uname -s 00:16:42.771 19:37:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.771 19:37:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.771 19:37:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.771 19:37:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.771 19:37:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.771 19:37:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.771 19:37:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.771 19:37:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.771 19:37:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.771 19:37:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.771 19:37:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:42.771 19:37:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:42.771 19:37:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.771 19:37:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.771 19:37:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.771 19:37:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.771 19:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.771 19:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.771 19:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.771 19:37:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.771 19:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.772 19:37:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.772 19:37:29 -- paths/export.sh@5 -- # export PATH 00:16:42.772 19:37:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.772 19:37:29 -- nvmf/common.sh@46 -- # : 0 00:16:42.772 19:37:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.772 19:37:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.772 19:37:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.772 19:37:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.772 19:37:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.772 19:37:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.772 19:37:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.772 19:37:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.772 19:37:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.772 19:37:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.772 19:37:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:42.772 19:37:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:42.772 19:37:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.772 19:37:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:42.772 19:37:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:42.772 19:37:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:42.772 19:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.772 19:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.772 19:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.772 19:37:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:42.772 19:37:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:42.772 19:37:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:42.772 19:37:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:42.772 19:37:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:42.772 19:37:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:42.772 19:37:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.772 19:37:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.772 19:37:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.772 19:37:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:42.772 19:37:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.772 19:37:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.772 19:37:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.772 19:37:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.772 19:37:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.772 19:37:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.772 19:37:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.772 19:37:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.772 19:37:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:42.772 19:37:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:42.772 Cannot find device "nvmf_tgt_br" 00:16:42.772 19:37:29 -- nvmf/common.sh@154 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.772 Cannot find device "nvmf_tgt_br2" 00:16:42.772 19:37:29 -- nvmf/common.sh@155 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:42.772 19:37:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:42.772 Cannot find device "nvmf_tgt_br" 00:16:42.772 19:37:29 -- nvmf/common.sh@157 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:42.772 Cannot find device "nvmf_tgt_br2" 00:16:42.772 19:37:29 -- nvmf/common.sh@158 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:42.772 19:37:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:42.772 19:37:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.772 19:37:29 -- nvmf/common.sh@161 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.772 19:37:29 -- nvmf/common.sh@162 -- # true 00:16:42.772 19:37:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.772 19:37:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.031 19:37:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.031 19:37:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.031 19:37:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.031 19:37:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.031 19:37:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.031 19:37:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.031 19:37:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.031 19:37:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.031 19:37:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.031 19:37:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.031 19:37:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.031 19:37:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.031 19:37:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.031 19:37:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:43.031 19:37:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.031 19:37:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.031 19:37:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.031 19:37:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.031 19:37:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.031 19:37:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.031 19:37:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.031 19:37:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:43.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:16:43.031 00:16:43.031 --- 10.0.0.2 ping statistics --- 00:16:43.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.031 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:16:43.031 19:37:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:43.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:16:43.031 00:16:43.031 --- 10.0.0.3 ping statistics --- 00:16:43.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.031 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:43.031 19:37:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:43.031 00:16:43.031 --- 10.0.0.1 ping statistics --- 00:16:43.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.031 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:43.031 19:37:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.031 19:37:29 -- nvmf/common.sh@421 -- # return 0 00:16:43.031 19:37:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.031 19:37:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.031 19:37:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.031 19:37:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.031 19:37:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.031 19:37:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.031 19:37:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.031 19:37:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:43.031 19:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.031 19:37:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.031 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:43.031 19:37:29 -- nvmf/common.sh@469 -- # nvmfpid=87665 00:16:43.031 19:37:29 -- nvmf/common.sh@470 -- # waitforlisten 87665 00:16:43.031 19:37:29 -- common/autotest_common.sh@829 -- # '[' -z 87665 ']' 00:16:43.031 19:37:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.031 19:37:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:43.031 19:37:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
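The nvmf_veth_init sequence traced above builds the whole test topology from scratch: a network namespace for the target, veth pairs on each side, the 10.0.0.0/24 addresses used throughout these tests, a bridge tying the peers together, an iptables rule admitting TCP port 4420, and ping checks before nvme-tcp is loaded. Condensed from the commands logged above (same interface names and addresses; the second target interface nvmf_tgt_if2/10.0.0.3 and some of the link-up steps are elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check before loading nvme-tcp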
00:16:43.031 19:37:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.031 19:37:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.031 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:43.031 [2024-12-15 19:37:29.902051] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:43.031 [2024-12-15 19:37:29.902159] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.289 [2024-12-15 19:37:30.033742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.289 [2024-12-15 19:37:30.127630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.289 [2024-12-15 19:37:30.127766] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.289 [2024-12-15 19:37:30.127779] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.289 [2024-12-15 19:37:30.127787] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.289 [2024-12-15 19:37:30.128242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:43.289 [2024-12-15 19:37:30.128391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:43.289 [2024-12-15 19:37:30.128437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.289 [2024-12-15 19:37:30.128437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:44.224 19:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.224 19:37:30 -- common/autotest_common.sh@862 -- # return 0 00:16:44.225 19:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.225 19:37:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.225 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 19:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.225 19:37:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.225 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.225 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 [2024-12-15 19:37:31.008319] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.225 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.225 19:37:31 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.225 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.225 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 Malloc0 00:16:44.225 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.225 19:37:31 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.225 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.225 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.225 19:37:31 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.225 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.225 
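With the target app up and listening on /var/tmp/spdk.sock, the bdevio test provisions everything over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and, just below, a listener on 10.0.0.2:4420. The same steps written directly against scripts/rpc.py, as a sketch of what the rpc_cmd calls in this trace boil down to (rpc_cmd is roughly a wrapper around rpc.py on the default socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420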
19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.225 19:37:31 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.225 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.225 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:44.225 [2024-12-15 19:37:31.084936] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.225 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.225 19:37:31 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:44.225 19:37:31 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:44.225 19:37:31 -- nvmf/common.sh@520 -- # config=() 00:16:44.225 19:37:31 -- nvmf/common.sh@520 -- # local subsystem config 00:16:44.225 19:37:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:44.225 19:37:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:44.225 { 00:16:44.225 "params": { 00:16:44.225 "name": "Nvme$subsystem", 00:16:44.225 "trtype": "$TEST_TRANSPORT", 00:16:44.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.225 "adrfam": "ipv4", 00:16:44.225 "trsvcid": "$NVMF_PORT", 00:16:44.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.225 "hdgst": ${hdgst:-false}, 00:16:44.225 "ddgst": ${ddgst:-false} 00:16:44.225 }, 00:16:44.225 "method": "bdev_nvme_attach_controller" 00:16:44.225 } 00:16:44.225 EOF 00:16:44.225 )") 00:16:44.225 19:37:31 -- nvmf/common.sh@542 -- # cat 00:16:44.225 19:37:31 -- nvmf/common.sh@544 -- # jq . 00:16:44.225 19:37:31 -- nvmf/common.sh@545 -- # IFS=, 00:16:44.225 19:37:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:44.225 "params": { 00:16:44.225 "name": "Nvme1", 00:16:44.225 "trtype": "tcp", 00:16:44.225 "traddr": "10.0.0.2", 00:16:44.225 "adrfam": "ipv4", 00:16:44.225 "trsvcid": "4420", 00:16:44.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.225 "hdgst": false, 00:16:44.225 "ddgst": false 00:16:44.225 }, 00:16:44.225 "method": "bdev_nvme_attach_controller" 00:16:44.225 }' 00:16:44.483 [2024-12-15 19:37:31.147533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:44.483 [2024-12-15 19:37:31.147651] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87723 ] 00:16:44.483 [2024-12-15 19:37:31.287959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.483 [2024-12-15 19:37:31.365643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.483 [2024-12-15 19:37:31.365765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.483 [2024-12-15 19:37:31.365771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.741 [2024-12-15 19:37:31.565306] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
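bdevio never reads an initiator config from disk here: gen_nvmf_target_json prints a JSON bdev config whose bdev entry is the bdev_nvme_attach_controller call printed above (Nvme1 over TCP to 10.0.0.2:4420, subsystem cnode1), and the script hands it over through bash process substitution, which is why the logged command line reads --json /dev/fd/62. A sketch of that invocation pattern (the fd number is whatever bash happens to assign):

    # Process substitution exposes the helper's stdout as a /dev/fd/NN path
    # that bdevio opens like an ordinary JSON config file.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)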
00:16:44.742 [2024-12-15 19:37:31.565372] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:44.742 I/O targets: 00:16:44.742 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:44.742 00:16:44.742 00:16:44.742 CUnit - A unit testing framework for C - Version 2.1-3 00:16:44.742 http://cunit.sourceforge.net/ 00:16:44.742 00:16:44.742 00:16:44.742 Suite: bdevio tests on: Nvme1n1 00:16:44.742 Test: blockdev write read block ...passed 00:16:45.000 Test: blockdev write zeroes read block ...passed 00:16:45.000 Test: blockdev write zeroes read no split ...passed 00:16:45.000 Test: blockdev write zeroes read split ...passed 00:16:45.000 Test: blockdev write zeroes read split partial ...passed 00:16:45.000 Test: blockdev reset ...[2024-12-15 19:37:31.680916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:45.000 [2024-12-15 19:37:31.681025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaeee0 (9): Bad file descriptor 00:16:45.000 passed 00:16:45.000 Test: blockdev write read 8 blocks ...[2024-12-15 19:37:31.695573] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:45.000 passed 00:16:45.000 Test: blockdev write read size > 128k ...passed 00:16:45.000 Test: blockdev write read invalid size ...passed 00:16:45.000 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:45.000 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:45.000 Test: blockdev write read max offset ...passed 00:16:45.000 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:45.000 Test: blockdev writev readv 8 blocks ...passed 00:16:45.000 Test: blockdev writev readv 30 x 1block ...passed 00:16:45.000 Test: blockdev writev readv block ...passed 00:16:45.000 Test: blockdev writev readv size > 128k ...passed 00:16:45.000 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:45.000 Test: blockdev comparev and writev ...[2024-12-15 19:37:31.867992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.000 [2024-12-15 19:37:31.868053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.000 [2024-12-15 19:37:31.868071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.000 [2024-12-15 19:37:31.868088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.000 [2024-12-15 19:37:31.868511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.000 [2024-12-15 19:37:31.868532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:45.000 [2024-12-15 19:37:31.868548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.001 [2024-12-15 19:37:31.868558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:45.001 [2024-12-15 19:37:31.868947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.001 [2024-12-15 19:37:31.868964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:45.001 [2024-12-15 19:37:31.868979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.001 [2024-12-15 19:37:31.868988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:45.001 [2024-12-15 19:37:31.869326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.001 [2024-12-15 19:37:31.869341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:45.001 [2024-12-15 19:37:31.869355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.001 [2024-12-15 19:37:31.869364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:45.259 passed 00:16:45.259 Test: blockdev nvme passthru rw ...passed 00:16:45.259 Test: blockdev nvme passthru vendor specific ...passed 00:16:45.259 Test: blockdev nvme admin passthru ...[2024-12-15 19:37:31.951260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.259 [2024-12-15 19:37:31.951285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:45.259 [2024-12-15 19:37:31.951441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.259 [2024-12-15 19:37:31.951456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:45.259 [2024-12-15 19:37:31.951586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.259 [2024-12-15 19:37:31.951599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:45.259 [2024-12-15 19:37:31.951717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.259 [2024-12-15 19:37:31.951731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:45.259 passed 00:16:45.259 Test: blockdev copy ...passed 00:16:45.259 00:16:45.259 Run Summary: Type Total Ran Passed Failed Inactive 00:16:45.259 suites 1 1 n/a 0 0 00:16:45.259 tests 23 23 23 0 0 00:16:45.259 asserts 152 152 152 0 n/a 00:16:45.259 00:16:45.259 Elapsed time = 0.893 seconds 00:16:45.516 19:37:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.516 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.516 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:16:45.516 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.516 19:37:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:45.516 19:37:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:45.516 19:37:32 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:45.516 19:37:32 -- nvmf/common.sh@116 -- # sync 00:16:45.516 19:37:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:45.516 19:37:32 -- nvmf/common.sh@119 -- # set +e 00:16:45.516 19:37:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:45.516 19:37:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:45.516 rmmod nvme_tcp 00:16:45.516 rmmod nvme_fabrics 00:16:45.516 rmmod nvme_keyring 00:16:45.516 19:37:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:45.516 19:37:32 -- nvmf/common.sh@123 -- # set -e 00:16:45.516 19:37:32 -- nvmf/common.sh@124 -- # return 0 00:16:45.516 19:37:32 -- nvmf/common.sh@477 -- # '[' -n 87665 ']' 00:16:45.516 19:37:32 -- nvmf/common.sh@478 -- # killprocess 87665 00:16:45.516 19:37:32 -- common/autotest_common.sh@936 -- # '[' -z 87665 ']' 00:16:45.516 19:37:32 -- common/autotest_common.sh@940 -- # kill -0 87665 00:16:45.516 19:37:32 -- common/autotest_common.sh@941 -- # uname 00:16:45.774 19:37:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.774 19:37:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87665 00:16:45.774 killing process with pid 87665 00:16:45.774 19:37:32 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:45.774 19:37:32 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:45.774 19:37:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87665' 00:16:45.774 19:37:32 -- common/autotest_common.sh@955 -- # kill 87665 00:16:45.774 19:37:32 -- common/autotest_common.sh@960 -- # wait 87665 00:16:46.033 19:37:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:46.033 19:37:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:46.033 19:37:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:46.033 19:37:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.033 19:37:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:46.033 19:37:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.033 19:37:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.033 19:37:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.033 19:37:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:46.033 00:16:46.033 real 0m3.517s 00:16:46.033 user 0m12.608s 00:16:46.033 sys 0m0.934s 00:16:46.033 19:37:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:46.033 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 ************************************ 00:16:46.033 END TEST nvmf_bdevio 00:16:46.033 ************************************ 00:16:46.033 19:37:32 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:46.033 19:37:32 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:46.033 19:37:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:46.033 19:37:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.033 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 ************************************ 00:16:46.033 START TEST nvmf_bdevio_no_huge 00:16:46.033 ************************************ 00:16:46.033 19:37:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:46.293 * Looking for test storage... 
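The starred START TEST / END TEST banners and the real/user/sys lines running through this log come from the run_test helper that wraps every suite: it prints a start banner, times the test script, and prints a matching end banner so a single suite can be pulled out of a long log with grep. Roughly, as a simplified sketch rather than the exact autotest_common.sh helper:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"    # e.g. .../test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
        echo "END TEST $name"
    }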
00:16:46.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.293 19:37:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:46.293 19:37:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:46.293 19:37:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:46.293 19:37:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:46.293 19:37:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:46.293 19:37:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:46.293 19:37:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:46.293 19:37:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:46.293 19:37:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:46.293 19:37:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.293 19:37:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:46.293 19:37:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:46.293 19:37:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:46.293 19:37:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:46.293 19:37:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:46.293 19:37:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:46.293 19:37:33 -- scripts/common.sh@344 -- # : 1 00:16:46.293 19:37:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:46.293 19:37:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.293 19:37:33 -- scripts/common.sh@364 -- # decimal 1 00:16:46.293 19:37:33 -- scripts/common.sh@352 -- # local d=1 00:16:46.293 19:37:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.293 19:37:33 -- scripts/common.sh@354 -- # echo 1 00:16:46.293 19:37:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:46.293 19:37:33 -- scripts/common.sh@365 -- # decimal 2 00:16:46.293 19:37:33 -- scripts/common.sh@352 -- # local d=2 00:16:46.293 19:37:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.293 19:37:33 -- scripts/common.sh@354 -- # echo 2 00:16:46.293 19:37:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:46.293 19:37:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:46.293 19:37:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:46.293 19:37:33 -- scripts/common.sh@367 -- # return 0 00:16:46.293 19:37:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.293 19:37:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.293 --rc genhtml_branch_coverage=1 00:16:46.293 --rc genhtml_function_coverage=1 00:16:46.293 --rc genhtml_legend=1 00:16:46.293 --rc geninfo_all_blocks=1 00:16:46.293 --rc geninfo_unexecuted_blocks=1 00:16:46.293 00:16:46.293 ' 00:16:46.293 19:37:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.293 --rc genhtml_branch_coverage=1 00:16:46.293 --rc genhtml_function_coverage=1 00:16:46.293 --rc genhtml_legend=1 00:16:46.293 --rc geninfo_all_blocks=1 00:16:46.293 --rc geninfo_unexecuted_blocks=1 00:16:46.293 00:16:46.293 ' 00:16:46.293 19:37:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.293 --rc genhtml_branch_coverage=1 00:16:46.293 --rc genhtml_function_coverage=1 00:16:46.293 --rc genhtml_legend=1 00:16:46.293 --rc geninfo_all_blocks=1 00:16:46.293 --rc geninfo_unexecuted_blocks=1 00:16:46.293 00:16:46.293 ' 00:16:46.293 
19:37:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.293 --rc genhtml_branch_coverage=1 00:16:46.293 --rc genhtml_function_coverage=1 00:16:46.293 --rc genhtml_legend=1 00:16:46.293 --rc geninfo_all_blocks=1 00:16:46.293 --rc geninfo_unexecuted_blocks=1 00:16:46.293 00:16:46.293 ' 00:16:46.293 19:37:33 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.293 19:37:33 -- nvmf/common.sh@7 -- # uname -s 00:16:46.293 19:37:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.293 19:37:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.293 19:37:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.293 19:37:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.293 19:37:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.293 19:37:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.293 19:37:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.293 19:37:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.293 19:37:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.293 19:37:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.293 19:37:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:46.293 19:37:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:46.293 19:37:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.293 19:37:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.294 19:37:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.294 19:37:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.294 19:37:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.294 19:37:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.294 19:37:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.294 19:37:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.294 19:37:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.294 19:37:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.294 19:37:33 -- paths/export.sh@5 -- # export PATH 00:16:46.294 19:37:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.294 19:37:33 -- nvmf/common.sh@46 -- # : 0 00:16:46.294 19:37:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:46.294 19:37:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:46.294 19:37:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:46.294 19:37:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.294 19:37:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.294 19:37:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:46.294 19:37:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:46.294 19:37:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:46.294 19:37:33 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.294 19:37:33 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.294 19:37:33 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:46.294 19:37:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:46.294 19:37:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.294 19:37:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:46.294 19:37:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:46.294 19:37:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:46.294 19:37:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.294 19:37:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.294 19:37:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.294 19:37:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:46.294 19:37:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:46.294 19:37:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:46.294 19:37:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:46.294 19:37:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:46.294 19:37:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:46.294 19:37:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.294 19:37:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.294 19:37:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:46.294 19:37:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:46.294 19:37:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.294 19:37:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.294 19:37:33 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.294 19:37:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.294 19:37:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.294 19:37:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.294 19:37:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.294 19:37:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.294 19:37:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:46.294 19:37:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:46.294 Cannot find device "nvmf_tgt_br" 00:16:46.294 19:37:33 -- nvmf/common.sh@154 -- # true 00:16:46.294 19:37:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.294 Cannot find device "nvmf_tgt_br2" 00:16:46.294 19:37:33 -- nvmf/common.sh@155 -- # true 00:16:46.294 19:37:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:46.294 19:37:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:46.294 Cannot find device "nvmf_tgt_br" 00:16:46.294 19:37:33 -- nvmf/common.sh@157 -- # true 00:16:46.294 19:37:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:46.294 Cannot find device "nvmf_tgt_br2" 00:16:46.294 19:37:33 -- nvmf/common.sh@158 -- # true 00:16:46.294 19:37:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:46.294 19:37:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:46.553 19:37:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.553 19:37:33 -- nvmf/common.sh@161 -- # true 00:16:46.553 19:37:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.553 19:37:33 -- nvmf/common.sh@162 -- # true 00:16:46.553 19:37:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.553 19:37:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.553 19:37:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.553 19:37:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.553 19:37:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.553 19:37:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.553 19:37:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.553 19:37:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.553 19:37:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:46.553 19:37:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:46.553 19:37:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:46.553 19:37:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:46.553 19:37:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:46.553 19:37:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.553 19:37:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.553 19:37:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:46.553 19:37:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:46.553 19:37:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:46.553 19:37:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.553 19:37:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.553 19:37:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.553 19:37:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.553 19:37:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.553 19:37:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:46.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:46.554 00:16:46.554 --- 10.0.0.2 ping statistics --- 00:16:46.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.554 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:46.554 19:37:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:46.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:46.554 00:16:46.554 --- 10.0.0.3 ping statistics --- 00:16:46.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.554 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:46.554 19:37:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:46.554 00:16:46.554 --- 10.0.0.1 ping statistics --- 00:16:46.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.554 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:46.554 19:37:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.554 19:37:33 -- nvmf/common.sh@421 -- # return 0 00:16:46.554 19:37:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.554 19:37:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.554 19:37:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:46.554 19:37:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:46.554 19:37:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.554 19:37:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:46.554 19:37:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:46.554 19:37:33 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:46.554 19:37:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:46.554 19:37:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.554 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:16:46.554 19:37:33 -- nvmf/common.sh@469 -- # nvmfpid=87911 00:16:46.554 19:37:33 -- nvmf/common.sh@470 -- # waitforlisten 87911 00:16:46.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:46.554 19:37:33 -- common/autotest_common.sh@829 -- # '[' -z 87911 ']' 00:16:46.554 19:37:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:46.554 19:37:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.554 19:37:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.554 19:37:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.554 19:37:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.554 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:16:46.813 [2024-12-15 19:37:33.480348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:46.813 [2024-12-15 19:37:33.480633] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:46.813 [2024-12-15 19:37:33.627441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.071 [2024-12-15 19:37:33.728406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:47.071 [2024-12-15 19:37:33.728958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.071 [2024-12-15 19:37:33.729153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.071 [2024-12-15 19:37:33.729583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.071 [2024-12-15 19:37:33.729980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:47.071 [2024-12-15 19:37:33.730132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:47.071 [2024-12-15 19:37:33.730208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.071 [2024-12-15 19:37:33.730209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:47.638 19:37:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.638 19:37:34 -- common/autotest_common.sh@862 -- # return 0 00:16:47.638 19:37:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:47.638 19:37:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.638 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.638 19:37:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.638 19:37:34 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.638 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.638 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.897 [2024-12-15 19:37:34.539625] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.897 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.897 19:37:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:47.897 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.897 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.897 Malloc0 00:16:47.897 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.897 19:37:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
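Functionally the only change from the previous bdevio run is in the command lines: the target above, and the bdevio app just below, are both started with --no-huge -s 1024, so DPDK's EAL comes up with --no-huge --iova-mode=va and a 1024 MB pool of ordinary memory instead of hugepages. Schematically, with paths and options as logged and everything else identical to the hugepage run:

    # Target, inside the test namespace, without hugepages:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # bdevio test app, likewise without hugepages:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024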
00:16:47.897 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.897 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.897 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.897 19:37:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:47.897 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.897 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.897 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.897 19:37:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.897 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.897 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.897 [2024-12-15 19:37:34.583972] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.897 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.897 19:37:34 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:47.897 19:37:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:47.897 19:37:34 -- nvmf/common.sh@520 -- # config=() 00:16:47.897 19:37:34 -- nvmf/common.sh@520 -- # local subsystem config 00:16:47.897 19:37:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:47.897 19:37:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:47.897 { 00:16:47.897 "params": { 00:16:47.897 "name": "Nvme$subsystem", 00:16:47.897 "trtype": "$TEST_TRANSPORT", 00:16:47.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.897 "adrfam": "ipv4", 00:16:47.897 "trsvcid": "$NVMF_PORT", 00:16:47.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.897 "hdgst": ${hdgst:-false}, 00:16:47.897 "ddgst": ${ddgst:-false} 00:16:47.897 }, 00:16:47.897 "method": "bdev_nvme_attach_controller" 00:16:47.897 } 00:16:47.897 EOF 00:16:47.897 )") 00:16:47.897 19:37:34 -- nvmf/common.sh@542 -- # cat 00:16:47.897 19:37:34 -- nvmf/common.sh@544 -- # jq . 00:16:47.897 19:37:34 -- nvmf/common.sh@545 -- # IFS=, 00:16:47.897 19:37:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:47.897 "params": { 00:16:47.897 "name": "Nvme1", 00:16:47.897 "trtype": "tcp", 00:16:47.897 "traddr": "10.0.0.2", 00:16:47.897 "adrfam": "ipv4", 00:16:47.897 "trsvcid": "4420", 00:16:47.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.897 "hdgst": false, 00:16:47.897 "ddgst": false 00:16:47.897 }, 00:16:47.897 "method": "bdev_nvme_attach_controller" 00:16:47.897 }' 00:16:47.897 [2024-12-15 19:37:34.644175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:47.897 [2024-12-15 19:37:34.644474] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87965 ] 00:16:47.897 [2024-12-15 19:37:34.786506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.156 [2024-12-15 19:37:34.933255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.156 [2024-12-15 19:37:34.933373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.156 [2024-12-15 19:37:34.933382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.416 [2024-12-15 19:37:35.154068] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:48.416 [2024-12-15 19:37:35.154480] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:48.416 I/O targets: 00:16:48.416 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:48.416 00:16:48.416 00:16:48.416 CUnit - A unit testing framework for C - Version 2.1-3 00:16:48.416 http://cunit.sourceforge.net/ 00:16:48.416 00:16:48.416 00:16:48.416 Suite: bdevio tests on: Nvme1n1 00:16:48.416 Test: blockdev write read block ...passed 00:16:48.416 Test: blockdev write zeroes read block ...passed 00:16:48.416 Test: blockdev write zeroes read no split ...passed 00:16:48.416 Test: blockdev write zeroes read split ...passed 00:16:48.416 Test: blockdev write zeroes read split partial ...passed 00:16:48.416 Test: blockdev reset ...[2024-12-15 19:37:35.278139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:48.416 [2024-12-15 19:37:35.278407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4cdd10 (9): Bad file descriptor 00:16:48.416 [2024-12-15 19:37:35.292019] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:48.416 passed 00:16:48.416 Test: blockdev write read 8 blocks ...passed 00:16:48.416 Test: blockdev write read size > 128k ...passed 00:16:48.416 Test: blockdev write read invalid size ...passed 00:16:48.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.675 Test: blockdev write read max offset ...passed 00:16:48.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.675 Test: blockdev writev readv 8 blocks ...passed 00:16:48.675 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.675 Test: blockdev writev readv block ...passed 00:16:48.675 Test: blockdev writev readv size > 128k ...passed 00:16:48.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.675 Test: blockdev comparev and writev ...[2024-12-15 19:37:35.464887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.465150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.465262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.465343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.465743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.465991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.466199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.466505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.467002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.467204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.467419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.467621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.468150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.675 [2024-12-15 19:37:35.468314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.675 [2024-12-15 19:37:35.468486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.676 [2024-12-15 19:37:35.468679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.676 passed 00:16:48.676 Test: blockdev nvme passthru rw ...passed 00:16:48.676 Test: blockdev nvme passthru vendor specific ...[2024-12-15 19:37:35.552394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.676 [2024-12-15 19:37:35.552549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.676 [2024-12-15 19:37:35.552751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.676 [2024-12-15 19:37:35.552863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.676 [2024-12-15 19:37:35.553060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.676 [2024-12-15 19:37:35.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.676 [2024-12-15 19:37:35.553504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.676 [2024-12-15 19:37:35.553675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.676 passed 00:16:48.676 Test: blockdev nvme admin passthru ...passed 00:16:48.935 Test: blockdev copy ...passed 00:16:48.935 00:16:48.935 Run Summary: Type Total Ran Passed Failed Inactive 00:16:48.935 suites 1 1 n/a 0 0 00:16:48.935 tests 23 23 23 0 0 00:16:48.935 asserts 152 152 152 0 n/a 00:16:48.935 00:16:48.935 Elapsed time = 0.908 seconds 00:16:49.194 19:37:35 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.194 19:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.194 19:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:49.194 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.194 19:37:36 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:49.194 19:37:36 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:49.194 19:37:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:49.194 19:37:36 -- nvmf/common.sh@116 -- # sync 00:16:49.194 19:37:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:49.194 19:37:36 -- nvmf/common.sh@119 -- # set +e 00:16:49.194 19:37:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:49.194 19:37:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:49.194 rmmod nvme_tcp 00:16:49.194 rmmod nvme_fabrics 00:16:49.452 rmmod nvme_keyring 00:16:49.452 19:37:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:49.452 19:37:36 -- nvmf/common.sh@123 -- # set -e 00:16:49.452 19:37:36 -- nvmf/common.sh@124 -- # return 0 00:16:49.452 19:37:36 -- nvmf/common.sh@477 -- # '[' -n 87911 ']' 00:16:49.452 19:37:36 -- nvmf/common.sh@478 -- # killprocess 87911 00:16:49.452 19:37:36 -- common/autotest_common.sh@936 -- # '[' -z 87911 ']' 00:16:49.452 19:37:36 -- common/autotest_common.sh@940 -- # kill -0 87911 00:16:49.452 19:37:36 -- common/autotest_common.sh@941 -- # uname 00:16:49.452 19:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.452 19:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87911 00:16:49.452 killing process with pid 87911 00:16:49.452 
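Note on reproducing the case that just finished: stripped of the harness, nvmf_bdevio_no_huge starts nvmf_tgt without hugepages, builds a Malloc0-backed TCP subsystem over RPC, and points bdevio at it. The following is a condensed sketch assembled only from commands already shown in the trace (same paths and addresses as this environment), not the test script itself; gen_nvmf_target_json is the harness helper that emitted the bdev_nvme_attach_controller JSON above, and the process substitution is what bash exposes as the /dev/fd/62 seen in the trace.

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc=$SPDK/scripts/rpc.py
  # target: 1024 MB of ordinary (non-hugepage) memory, core mask 0x78, inside the test namespace
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # after /var/tmp/spdk.sock is up, wire up the subsystem
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: run the bdevio suite against the exported namespace
  $SPDK/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024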
19:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:49.452 19:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:49.452 19:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87911' 00:16:49.452 19:37:36 -- common/autotest_common.sh@955 -- # kill 87911 00:16:49.452 19:37:36 -- common/autotest_common.sh@960 -- # wait 87911 00:16:49.711 19:37:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:49.711 19:37:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:49.711 19:37:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:49.711 19:37:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.711 19:37:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:49.711 19:37:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.711 19:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.711 19:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.711 19:37:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:49.711 00:16:49.711 real 0m3.709s 00:16:49.711 user 0m13.271s 00:16:49.711 sys 0m1.427s 00:16:49.711 19:37:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:49.711 ************************************ 00:16:49.711 END TEST nvmf_bdevio_no_huge 00:16:49.711 ************************************ 00:16:49.711 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:16:49.970 19:37:36 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.970 19:37:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.970 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:16:49.970 ************************************ 00:16:49.970 START TEST nvmf_tls 00:16:49.970 ************************************ 00:16:49.970 19:37:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.970 * Looking for test storage... 00:16:49.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:49.970 19:37:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:49.970 19:37:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:49.970 19:37:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:49.970 19:37:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:49.970 19:37:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:49.970 19:37:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:49.970 19:37:36 -- scripts/common.sh@335 -- # IFS=.-: 00:16:49.970 19:37:36 -- scripts/common.sh@335 -- # read -ra ver1 00:16:49.970 19:37:36 -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.970 19:37:36 -- scripts/common.sh@336 -- # read -ra ver2 00:16:49.970 19:37:36 -- scripts/common.sh@337 -- # local 'op=<' 00:16:49.970 19:37:36 -- scripts/common.sh@339 -- # ver1_l=2 00:16:49.970 19:37:36 -- scripts/common.sh@340 -- # ver2_l=1 00:16:49.970 19:37:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:49.970 19:37:36 -- scripts/common.sh@343 -- # case "$op" in 00:16:49.970 19:37:36 -- scripts/common.sh@344 -- # : 1 00:16:49.970 19:37:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:49.970 19:37:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.970 19:37:36 -- scripts/common.sh@364 -- # decimal 1 00:16:49.970 19:37:36 -- scripts/common.sh@352 -- # local d=1 00:16:49.970 19:37:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.970 19:37:36 -- scripts/common.sh@354 -- # echo 1 00:16:49.970 19:37:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:49.970 19:37:36 -- scripts/common.sh@365 -- # decimal 2 00:16:49.970 19:37:36 -- scripts/common.sh@352 -- # local d=2 00:16:49.970 19:37:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.970 19:37:36 -- scripts/common.sh@354 -- # echo 2 00:16:49.970 19:37:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:49.970 19:37:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:49.970 19:37:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:49.970 19:37:36 -- scripts/common.sh@367 -- # return 0 00:16:49.970 19:37:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:49.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.970 --rc genhtml_branch_coverage=1 00:16:49.970 --rc genhtml_function_coverage=1 00:16:49.970 --rc genhtml_legend=1 00:16:49.970 --rc geninfo_all_blocks=1 00:16:49.970 --rc geninfo_unexecuted_blocks=1 00:16:49.970 00:16:49.970 ' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:49.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.970 --rc genhtml_branch_coverage=1 00:16:49.970 --rc genhtml_function_coverage=1 00:16:49.970 --rc genhtml_legend=1 00:16:49.970 --rc geninfo_all_blocks=1 00:16:49.970 --rc geninfo_unexecuted_blocks=1 00:16:49.970 00:16:49.970 ' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:49.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.970 --rc genhtml_branch_coverage=1 00:16:49.970 --rc genhtml_function_coverage=1 00:16:49.970 --rc genhtml_legend=1 00:16:49.970 --rc geninfo_all_blocks=1 00:16:49.970 --rc geninfo_unexecuted_blocks=1 00:16:49.970 00:16:49.970 ' 00:16:49.970 19:37:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:49.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.970 --rc genhtml_branch_coverage=1 00:16:49.970 --rc genhtml_function_coverage=1 00:16:49.970 --rc genhtml_legend=1 00:16:49.970 --rc geninfo_all_blocks=1 00:16:49.970 --rc geninfo_unexecuted_blocks=1 00:16:49.970 00:16:49.970 ' 00:16:49.970 19:37:36 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.970 19:37:36 -- nvmf/common.sh@7 -- # uname -s 00:16:49.970 19:37:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.970 19:37:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.970 19:37:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.970 19:37:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.970 19:37:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.970 19:37:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.970 19:37:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.970 19:37:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.970 19:37:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.970 19:37:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.970 19:37:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:49.970 
19:37:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:16:49.970 19:37:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.970 19:37:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.970 19:37:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.970 19:37:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.970 19:37:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.971 19:37:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.971 19:37:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.971 19:37:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.971 19:37:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.971 19:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.971 19:37:36 -- paths/export.sh@5 -- # export PATH 00:16:49.971 19:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.971 19:37:36 -- nvmf/common.sh@46 -- # : 0 00:16:49.971 19:37:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.971 19:37:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.971 19:37:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.971 19:37:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.971 19:37:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.971 19:37:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:49.971 19:37:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.971 19:37:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.971 19:37:36 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.971 19:37:36 -- target/tls.sh@71 -- # nvmftestinit 00:16:49.971 19:37:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.971 19:37:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.971 19:37:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.971 19:37:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.971 19:37:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.971 19:37:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.971 19:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.971 19:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.971 19:37:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:49.971 19:37:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:49.971 19:37:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:49.971 19:37:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:49.971 19:37:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:49.971 19:37:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:49.971 19:37:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.971 19:37:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.971 19:37:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.971 19:37:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:49.971 19:37:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.971 19:37:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.971 19:37:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.971 19:37:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.971 19:37:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.971 19:37:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.971 19:37:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.971 19:37:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.971 19:37:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:49.971 19:37:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:50.230 Cannot find device "nvmf_tgt_br" 00:16:50.230 19:37:36 -- nvmf/common.sh@154 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.230 Cannot find device "nvmf_tgt_br2" 00:16:50.230 19:37:36 -- nvmf/common.sh@155 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:50.230 19:37:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:50.230 Cannot find device "nvmf_tgt_br" 00:16:50.230 19:37:36 -- nvmf/common.sh@157 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:50.230 Cannot find device "nvmf_tgt_br2" 00:16:50.230 19:37:36 -- nvmf/common.sh@158 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:50.230 19:37:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:50.230 19:37:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:16:50.230 19:37:36 -- nvmf/common.sh@161 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.230 19:37:36 -- nvmf/common.sh@162 -- # true 00:16:50.230 19:37:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.230 19:37:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.230 19:37:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.230 19:37:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.230 19:37:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.230 19:37:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.230 19:37:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.230 19:37:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.230 19:37:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.230 19:37:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:50.230 19:37:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:50.230 19:37:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:50.230 19:37:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:50.230 19:37:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.230 19:37:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.489 19:37:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.489 19:37:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:50.489 19:37:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:50.489 19:37:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.490 19:37:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.490 19:37:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.490 19:37:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.490 19:37:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.490 19:37:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:50.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:50.490 00:16:50.490 --- 10.0.0.2 ping statistics --- 00:16:50.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.490 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:50.490 19:37:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:50.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:16:50.490 00:16:50.490 --- 10.0.0.3 ping statistics --- 00:16:50.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.490 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:50.490 19:37:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:50.490 00:16:50.490 --- 10.0.0.1 ping statistics --- 00:16:50.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.490 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:50.490 19:37:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.490 19:37:37 -- nvmf/common.sh@421 -- # return 0 00:16:50.490 19:37:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:50.490 19:37:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.490 19:37:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:50.490 19:37:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:50.490 19:37:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.490 19:37:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:50.490 19:37:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:50.490 19:37:37 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:50.490 19:37:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.490 19:37:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.490 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:50.490 19:37:37 -- nvmf/common.sh@469 -- # nvmfpid=88168 00:16:50.490 19:37:37 -- nvmf/common.sh@470 -- # waitforlisten 88168 00:16:50.490 19:37:37 -- common/autotest_common.sh@829 -- # '[' -z 88168 ']' 00:16:50.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.490 19:37:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.490 19:37:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:50.490 19:37:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.490 19:37:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.490 19:37:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.490 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:50.490 [2024-12-15 19:37:37.299558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:50.490 [2024-12-15 19:37:37.299849] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.748 [2024-12-15 19:37:37.443789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.748 [2024-12-15 19:37:37.524646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.748 [2024-12-15 19:37:37.524868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.748 [2024-12-15 19:37:37.524886] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.748 [2024-12-15 19:37:37.524898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
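For reference while reading the rest of this run: the nvmf_veth_init sequence traced above builds the network that every later address refers to. The target lives inside the nvmf_tgt_ns_spdk namespace and answers on 10.0.0.2 (plus 10.0.0.3 for a second port), the initiator stays in the root namespace at 10.0.0.1, and the two sides meet on the nvmf_br bridge. A condensed sketch of the same commands, with the error-tolerant cleanup and duplicate checks dropped:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1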
00:16:50.748 [2024-12-15 19:37:37.524938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.689 19:37:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.689 19:37:38 -- common/autotest_common.sh@862 -- # return 0 00:16:51.689 19:37:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.689 19:37:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.689 19:37:38 -- common/autotest_common.sh@10 -- # set +x 00:16:51.689 19:37:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.689 19:37:38 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:51.689 19:37:38 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:51.689 true 00:16:51.947 19:37:38 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.947 19:37:38 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:52.205 19:37:38 -- target/tls.sh@82 -- # version=0 00:16:52.205 19:37:38 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:52.205 19:37:38 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.463 19:37:39 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.463 19:37:39 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:52.722 19:37:39 -- target/tls.sh@90 -- # version=13 00:16:52.722 19:37:39 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:52.722 19:37:39 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:52.980 19:37:39 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.980 19:37:39 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:53.239 19:37:39 -- target/tls.sh@98 -- # version=7 00:16:53.239 19:37:39 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:53.239 19:37:39 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.239 19:37:39 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:53.497 19:37:40 -- target/tls.sh@105 -- # ktls=false 00:16:53.497 19:37:40 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:53.497 19:37:40 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:53.755 19:37:40 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:53.755 19:37:40 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.014 19:37:40 -- target/tls.sh@113 -- # ktls=true 00:16:54.014 19:37:40 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:54.014 19:37:40 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:54.273 19:37:40 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.273 19:37:40 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:54.273 19:37:41 -- target/tls.sh@121 -- # ktls=false 00:16:54.273 19:37:41 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:54.531 19:37:41 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:54.531 19:37:41 -- target/tls.sh@49 -- # local key hash crc 00:16:54.531 19:37:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:54.531 19:37:41 -- target/tls.sh@51 -- # hash=01 00:16:54.531 19:37:41 -- 
target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # gzip -1 -c 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # tail -c8 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # head -c 4 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # crc='p$H�' 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.531 19:37:41 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.531 19:37:41 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:54.531 19:37:41 -- target/tls.sh@49 -- # local key hash crc 00:16:54.531 19:37:41 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:54.531 19:37:41 -- target/tls.sh@51 -- # hash=01 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # gzip -1 -c 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # tail -c8 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # head -c 4 00:16:54.531 19:37:41 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:54.531 19:37:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:54.531 19:37:41 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:54.531 19:37:41 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.531 19:37:41 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:54.531 19:37:41 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.531 19:37:41 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:54.531 19:37:41 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.531 19:37:41 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:54.531 19:37:41 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:54.790 19:37:41 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:55.049 19:37:41 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.049 19:37:41 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.049 19:37:41 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:55.307 [2024-12-15 19:37:42.095249] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.307 19:37:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:55.566 19:37:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:55.824 [2024-12-15 19:37:42.523324] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:16:55.824 [2024-12-15 19:37:42.523589] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.824 19:37:42 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:56.083 malloc0 00:16:56.083 19:37:42 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:56.341 19:37:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:56.600 19:37:43 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:06.604 Initializing NVMe Controllers 00:17:06.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:06.604 Initialization complete. Launching workers. 00:17:06.604 ======================================================== 00:17:06.604 Latency(us) 00:17:06.604 Device Information : IOPS MiB/s Average min max 00:17:06.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12528.85 48.94 5108.99 1583.51 7552.91 00:17:06.604 ======================================================== 00:17:06.604 Total : 12528.85 48.94 5108.99 1583.51 7552.91 00:17:06.604 00:17:06.604 19:37:53 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:06.604 19:37:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:06.604 19:37:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:06.604 19:37:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:06.604 19:37:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:06.604 19:37:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.604 19:37:53 -- target/tls.sh@28 -- # bdevperf_pid=88540 00:17:06.604 19:37:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.604 19:37:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.604 19:37:53 -- target/tls.sh@31 -- # waitforlisten 88540 /var/tmp/bdevperf.sock 00:17:06.604 19:37:53 -- common/autotest_common.sh@829 -- # '[' -z 88540 ']' 00:17:06.604 19:37:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.604 19:37:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.604 19:37:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
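Before the bdevperf cases, it is worth unpacking the format_interchange_psk step traced a little earlier, since every --psk file used below comes from it. The key written to key1.txt is the configured hex string plus its CRC32, base64-encoded and wrapped in the NVMeTLSkey-1:01: prefix; the CRC32 is read out of the gzip trailer, whose first four of the last eight bytes hold the checksum of the uncompressed input. A minimal re-derivation following the same pipeline as the trace (a sketch, not the tls.sh helper verbatim):

  key=00112233445566778899aabbccddeeff                           # configured PSK, hash id 01 as in the trace
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)      # CRC32 of the key string, 4 raw bytes
  echo -n "NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):" > key1.txt
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
  chmod 0600 key1.txt

key2.txt is produced the same way from ffeeddccbbaa99887766554433221100. The target side then enables TLS on the listener (nvmf_subsystem_add_listener ... -k) and registers host1 with key1.txt only (nvmf_subsystem_add_host ... --psk), which is exactly what the key-mismatch cases later in this run probe.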
00:17:06.604 19:37:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.604 19:37:53 -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 [2024-12-15 19:37:53.503836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:06.863 [2024-12-15 19:37:53.503960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88540 ] 00:17:06.863 [2024-12-15 19:37:53.642875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.863 [2024-12-15 19:37:53.706978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.799 19:37:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.799 19:37:54 -- common/autotest_common.sh@862 -- # return 0 00:17:07.799 19:37:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.058 [2024-12-15 19:37:54.737239] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.058 TLSTESTn1 00:17:08.058 19:37:54 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:08.058 Running I/O for 10 seconds... 00:17:18.037 00:17:18.037 Latency(us) 00:17:18.037 [2024-12-15T19:38:04.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.037 [2024-12-15T19:38:04.933Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:18.037 Verification LBA range: start 0x0 length 0x2000 00:17:18.037 TLSTESTn1 : 10.01 6736.54 26.31 0.00 0.00 18972.13 4021.53 18945.86 00:17:18.037 [2024-12-15T19:38:04.933Z] =================================================================================================================== 00:17:18.037 [2024-12-15T19:38:04.933Z] Total : 6736.54 26.31 0.00 0.00 18972.13 4021.53 18945.86 00:17:18.037 0 00:17:18.295 19:38:04 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.295 19:38:04 -- target/tls.sh@45 -- # killprocess 88540 00:17:18.295 19:38:04 -- common/autotest_common.sh@936 -- # '[' -z 88540 ']' 00:17:18.295 19:38:04 -- common/autotest_common.sh@940 -- # kill -0 88540 00:17:18.295 19:38:04 -- common/autotest_common.sh@941 -- # uname 00:17:18.295 19:38:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.295 19:38:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88540 00:17:18.295 killing process with pid 88540 00:17:18.295 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.295 00:17:18.295 Latency(us) 00:17:18.295 [2024-12-15T19:38:05.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.295 [2024-12-15T19:38:05.191Z] =================================================================================================================== 00:17:18.295 [2024-12-15T19:38:05.191Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.295 19:38:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.295 19:38:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.295 19:38:04 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 88540' 00:17:18.295 19:38:04 -- common/autotest_common.sh@955 -- # kill 88540 00:17:18.295 19:38:04 -- common/autotest_common.sh@960 -- # wait 88540 00:17:18.553 19:38:05 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.553 19:38:05 -- common/autotest_common.sh@650 -- # local es=0 00:17:18.553 19:38:05 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.553 19:38:05 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.553 19:38:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.553 19:38:05 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.553 19:38:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.553 19:38:05 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.553 19:38:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.553 19:38:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.553 19:38:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.553 19:38:05 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:18.553 19:38:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.553 19:38:05 -- target/tls.sh@28 -- # bdevperf_pid=88686 00:17:18.553 19:38:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.554 19:38:05 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.554 19:38:05 -- target/tls.sh@31 -- # waitforlisten 88686 /var/tmp/bdevperf.sock 00:17:18.554 19:38:05 -- common/autotest_common.sh@829 -- # '[' -z 88686 ']' 00:17:18.554 19:38:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.554 19:38:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.554 19:38:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.554 19:38:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.554 19:38:05 -- common/autotest_common.sh@10 -- # set +x 00:17:18.554 [2024-12-15 19:38:05.299469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:18.554 [2024-12-15 19:38:05.299626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88686 ] 00:17:18.554 [2024-12-15 19:38:05.436096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.811 [2024-12-15 19:38:05.490183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.747 19:38:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.747 19:38:06 -- common/autotest_common.sh@862 -- # return 0 00:17:19.747 19:38:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:19.747 [2024-12-15 19:38:06.513634] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.747 [2024-12-15 19:38:06.523919] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.747 [2024-12-15 19:38:06.524248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106a7c0 (107): Transport endpoint is not connected 00:17:19.747 [2024-12-15 19:38:06.525231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106a7c0 (9): Bad file descriptor 00:17:19.747 [2024-12-15 19:38:06.526227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.747 [2024-12-15 19:38:06.526250] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.747 [2024-12-15 19:38:06.526265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
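The failure above is the intended outcome of this first negative case. The target registered nqn.2016-06.io.spdk:host1 with key1.txt only, so a connection offering key2.txt cannot complete the TLS handshake; the connection is dropped during setup (the 'Transport endpoint is not connected' and 'Bad file descriptor' lines) and bdev_nvme_attach_controller returns the JSON-RPC error recorded next. Side by side, the passing and failing invocations from this run differ only in the key file:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  keys=/home/vagrant/spdk_repo/spdk/test/nvmf/target
  # registered on the target for host1 earlier in the run:
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $keys/key1.txt
  # succeeds (key matches the registration); this was the TLSTESTn1 run above:
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $keys/key1.txt
  # fails as traced here: key2.txt was never registered for host1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $keys/key2.txt

The next negative case (host2 with key1.txt) fails one step earlier for a different reason: host2 was never added to the subsystem, so the target cannot find a PSK for that identity at all ('Could not find PSK for identity' further below).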
00:17:19.747 2024/12/15 19:38:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:19.747 request: 00:17:19.747 { 00:17:19.747 "method": "bdev_nvme_attach_controller", 00:17:19.747 "params": { 00:17:19.747 "name": "TLSTEST", 00:17:19.747 "trtype": "tcp", 00:17:19.747 "traddr": "10.0.0.2", 00:17:19.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.747 "adrfam": "ipv4", 00:17:19.747 "trsvcid": "4420", 00:17:19.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.747 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:19.747 } 00:17:19.747 } 00:17:19.747 Got JSON-RPC error response 00:17:19.747 GoRPCClient: error on JSON-RPC call 00:17:19.747 19:38:06 -- target/tls.sh@36 -- # killprocess 88686 00:17:19.747 19:38:06 -- common/autotest_common.sh@936 -- # '[' -z 88686 ']' 00:17:19.747 19:38:06 -- common/autotest_common.sh@940 -- # kill -0 88686 00:17:19.747 19:38:06 -- common/autotest_common.sh@941 -- # uname 00:17:19.747 19:38:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.747 19:38:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88686 00:17:19.747 killing process with pid 88686 00:17:19.747 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.747 00:17:19.747 Latency(us) 00:17:19.747 [2024-12-15T19:38:06.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.747 [2024-12-15T19:38:06.643Z] =================================================================================================================== 00:17:19.747 [2024-12-15T19:38:06.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.747 19:38:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.747 19:38:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.747 19:38:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88686' 00:17:19.747 19:38:06 -- common/autotest_common.sh@955 -- # kill 88686 00:17:19.747 19:38:06 -- common/autotest_common.sh@960 -- # wait 88686 00:17:20.006 19:38:06 -- target/tls.sh@37 -- # return 1 00:17:20.006 19:38:06 -- common/autotest_common.sh@653 -- # es=1 00:17:20.006 19:38:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.006 19:38:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.006 19:38:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.006 19:38:06 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.006 19:38:06 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.006 19:38:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.006 19:38:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.006 19:38:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.006 19:38:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.006 19:38:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.006 19:38:06 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.006 19:38:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.006 19:38:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.006 19:38:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:20.006 19:38:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:20.006 19:38:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.006 19:38:06 -- target/tls.sh@28 -- # bdevperf_pid=88732 00:17:20.006 19:38:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.006 19:38:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.006 19:38:06 -- target/tls.sh@31 -- # waitforlisten 88732 /var/tmp/bdevperf.sock 00:17:20.006 19:38:06 -- common/autotest_common.sh@829 -- # '[' -z 88732 ']' 00:17:20.006 19:38:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.006 19:38:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.006 19:38:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.006 19:38:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.006 19:38:06 -- common/autotest_common.sh@10 -- # set +x 00:17:20.006 [2024-12-15 19:38:06.876770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:20.006 [2024-12-15 19:38:06.876891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88732 ] 00:17:20.265 [2024-12-15 19:38:07.006666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.265 [2024-12-15 19:38:07.075346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.203 19:38:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.203 19:38:07 -- common/autotest_common.sh@862 -- # return 0 00:17:21.203 19:38:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.203 [2024-12-15 19:38:08.038610] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.203 [2024-12-15 19:38:08.044045] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.203 [2024-12-15 19:38:08.044084] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.203 [2024-12-15 19:38:08.044134] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.203 [2024-12-15 19:38:08.045066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xda97c0 (107): Transport endpoint is not connected 00:17:21.203 [2024-12-15 19:38:08.046058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda97c0 (9): Bad file descriptor 00:17:21.203 [2024-12-15 19:38:08.047055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.203 [2024-12-15 19:38:08.047078] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.203 [2024-12-15 19:38:08.047087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:21.203 2024/12/15 19:38:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:21.203 request: 00:17:21.203 { 00:17:21.203 "method": "bdev_nvme_attach_controller", 00:17:21.203 "params": { 00:17:21.203 "name": "TLSTEST", 00:17:21.203 "trtype": "tcp", 00:17:21.203 "traddr": "10.0.0.2", 00:17:21.203 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:21.203 "adrfam": "ipv4", 00:17:21.203 "trsvcid": "4420", 00:17:21.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.203 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:21.203 } 00:17:21.203 } 00:17:21.203 Got JSON-RPC error response 00:17:21.203 GoRPCClient: error on JSON-RPC call 00:17:21.203 19:38:08 -- target/tls.sh@36 -- # killprocess 88732 00:17:21.203 19:38:08 -- common/autotest_common.sh@936 -- # '[' -z 88732 ']' 00:17:21.203 19:38:08 -- common/autotest_common.sh@940 -- # kill -0 88732 00:17:21.203 19:38:08 -- common/autotest_common.sh@941 -- # uname 00:17:21.203 19:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.203 19:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88732 00:17:21.464 killing process with pid 88732 00:17:21.464 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.464 00:17:21.464 Latency(us) 00:17:21.464 [2024-12-15T19:38:08.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.464 [2024-12-15T19:38:08.360Z] =================================================================================================================== 00:17:21.464 [2024-12-15T19:38:08.360Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.464 19:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:21.464 19:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:21.464 19:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88732' 00:17:21.464 19:38:08 -- common/autotest_common.sh@955 -- # kill 88732 00:17:21.464 19:38:08 -- common/autotest_common.sh@960 -- # wait 88732 00:17:21.723 19:38:08 -- target/tls.sh@37 -- # return 1 00:17:21.723 19:38:08 -- common/autotest_common.sh@653 -- # es=1 00:17:21.723 19:38:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.723 19:38:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.723 19:38:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.723 19:38:08 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.723 19:38:08 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:21.723 19:38:08 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.723 19:38:08 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:21.723 19:38:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.723 19:38:08 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:21.723 19:38:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.723 19:38:08 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.723 19:38:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.723 19:38:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:21.723 19:38:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.723 19:38:08 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:21.723 19:38:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.723 19:38:08 -- target/tls.sh@28 -- # bdevperf_pid=88777 00:17:21.723 19:38:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.723 19:38:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.723 19:38:08 -- target/tls.sh@31 -- # waitforlisten 88777 /var/tmp/bdevperf.sock 00:17:21.723 19:38:08 -- common/autotest_common.sh@829 -- # '[' -z 88777 ']' 00:17:21.723 19:38:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.723 19:38:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.723 19:38:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.723 19:38:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.723 19:38:08 -- common/autotest_common.sh@10 -- # set +x 00:17:21.723 [2024-12-15 19:38:08.424898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:21.723 [2024-12-15 19:38:08.425163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88777 ] 00:17:21.723 [2024-12-15 19:38:08.554670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.982 [2024-12-15 19:38:08.622444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.551 19:38:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.551 19:38:09 -- common/autotest_common.sh@862 -- # return 0 00:17:22.551 19:38:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:22.811 [2024-12-15 19:38:09.674859] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.811 [2024-12-15 19:38:09.684117] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:22.811 [2024-12-15 19:38:09.684152] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:22.811 [2024-12-15 19:38:09.684202] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.811 [2024-12-15 19:38:09.684318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6007c0 (107): Transport endpoint is not connected 00:17:22.811 [2024-12-15 19:38:09.685310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6007c0 (9): Bad file descriptor 00:17:22.811 [2024-12-15 19:38:09.686307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:22.811 [2024-12-15 19:38:09.686343] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.811 [2024-12-15 19:38:09.686361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
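Note on the two rejections above ("Could not find PSK for identity: NVMe0R01 ..."): the target looks up the PSK by a TLS identity string that, as printed, combines a version/hash prefix with the host NQN and the subsystem NQN, and nothing was registered for host2/cnode1 or host1/cnode2, so both attaches are expected to fail. For contrast, a minimal sketch of a pairing that does line up, built only from commands already used in this run (registering key1.txt for host1 here is illustrative, not taken from this log):

  # target side: register the PSK for this host on the subsystem
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt
  # initiator side: attach with the same host NQN and the same PSK file
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt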
00:17:22.811 2024/12/15 19:38:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:22.811 request: 00:17:22.811 { 00:17:22.811 "method": "bdev_nvme_attach_controller", 00:17:22.811 "params": { 00:17:22.811 "name": "TLSTEST", 00:17:22.811 "trtype": "tcp", 00:17:22.811 "traddr": "10.0.0.2", 00:17:22.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.811 "adrfam": "ipv4", 00:17:22.811 "trsvcid": "4420", 00:17:22.811 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:22.811 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:22.811 } 00:17:22.811 } 00:17:22.811 Got JSON-RPC error response 00:17:22.811 GoRPCClient: error on JSON-RPC call 00:17:23.070 19:38:09 -- target/tls.sh@36 -- # killprocess 88777 00:17:23.070 19:38:09 -- common/autotest_common.sh@936 -- # '[' -z 88777 ']' 00:17:23.070 19:38:09 -- common/autotest_common.sh@940 -- # kill -0 88777 00:17:23.070 19:38:09 -- common/autotest_common.sh@941 -- # uname 00:17:23.070 19:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.070 19:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88777 00:17:23.070 killing process with pid 88777 00:17:23.070 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.070 00:17:23.070 Latency(us) 00:17:23.070 [2024-12-15T19:38:09.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.070 [2024-12-15T19:38:09.966Z] =================================================================================================================== 00:17:23.070 [2024-12-15T19:38:09.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:23.070 19:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:23.070 19:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:23.070 19:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88777' 00:17:23.070 19:38:09 -- common/autotest_common.sh@955 -- # kill 88777 00:17:23.070 19:38:09 -- common/autotest_common.sh@960 -- # wait 88777 00:17:23.490 19:38:10 -- target/tls.sh@37 -- # return 1 00:17:23.490 19:38:10 -- common/autotest_common.sh@653 -- # es=1 00:17:23.490 19:38:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:23.490 19:38:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:23.490 19:38:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:23.490 19:38:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.490 19:38:10 -- common/autotest_common.sh@650 -- # local es=0 00:17:23.490 19:38:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.490 19:38:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:23.490 19:38:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.490 19:38:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:23.490 19:38:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.490 19:38:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.490 19:38:10 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.490 19:38:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:23.490 19:38:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.490 19:38:10 -- target/tls.sh@23 -- # psk= 00:17:23.490 19:38:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.490 19:38:10 -- target/tls.sh@28 -- # bdevperf_pid=88823 00:17:23.490 19:38:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.490 19:38:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.490 19:38:10 -- target/tls.sh@31 -- # waitforlisten 88823 /var/tmp/bdevperf.sock 00:17:23.490 19:38:10 -- common/autotest_common.sh@829 -- # '[' -z 88823 ']' 00:17:23.490 19:38:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.490 19:38:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.490 19:38:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.490 19:38:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.490 19:38:10 -- common/autotest_common.sh@10 -- # set +x 00:17:23.490 [2024-12-15 19:38:10.068518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:23.490 [2024-12-15 19:38:10.068872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88823 ] 00:17:23.490 [2024-12-15 19:38:10.206460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.490 [2024-12-15 19:38:10.270368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.427 19:38:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.427 19:38:11 -- common/autotest_common.sh@862 -- # return 0 00:17:24.427 19:38:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:24.427 [2024-12-15 19:38:11.264394] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:24.427 [2024-12-15 19:38:11.266036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b5090 (9): Bad file descriptor 00:17:24.427 [2024-12-15 19:38:11.267031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:24.427 [2024-12-15 19:38:11.267065] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:24.427 [2024-12-15 19:38:11.267075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
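Note: this attempt differs from the previous two in that no --psk is passed at all. The listeners in this suite are created with -k (see the later nvmf_subsystem_add_listener calls and the "secure_channel": true entry in the saved config), so presumably the plain TCP connection cannot complete initialization and is torn down as shown above. A condensed sketch of the two sides, using only commands that appear verbatim in this log:

  # target: listener requires TLS
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # initiator: no --psk given, so the attach fails as shown above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1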
00:17:24.427 2024/12/15 19:38:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:24.427 request: 00:17:24.427 { 00:17:24.427 "method": "bdev_nvme_attach_controller", 00:17:24.427 "params": { 00:17:24.427 "name": "TLSTEST", 00:17:24.427 "trtype": "tcp", 00:17:24.427 "traddr": "10.0.0.2", 00:17:24.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.427 "adrfam": "ipv4", 00:17:24.427 "trsvcid": "4420", 00:17:24.427 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:24.427 } 00:17:24.427 } 00:17:24.427 Got JSON-RPC error response 00:17:24.427 GoRPCClient: error on JSON-RPC call 00:17:24.427 19:38:11 -- target/tls.sh@36 -- # killprocess 88823 00:17:24.427 19:38:11 -- common/autotest_common.sh@936 -- # '[' -z 88823 ']' 00:17:24.427 19:38:11 -- common/autotest_common.sh@940 -- # kill -0 88823 00:17:24.427 19:38:11 -- common/autotest_common.sh@941 -- # uname 00:17:24.427 19:38:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.427 19:38:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88823 00:17:24.427 killing process with pid 88823 00:17:24.427 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.427 00:17:24.427 Latency(us) 00:17:24.427 [2024-12-15T19:38:11.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.427 [2024-12-15T19:38:11.323Z] =================================================================================================================== 00:17:24.427 [2024-12-15T19:38:11.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.427 19:38:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:24.427 19:38:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:24.427 19:38:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88823' 00:17:24.427 19:38:11 -- common/autotest_common.sh@955 -- # kill 88823 00:17:24.427 19:38:11 -- common/autotest_common.sh@960 -- # wait 88823 00:17:24.686 19:38:11 -- target/tls.sh@37 -- # return 1 00:17:24.686 19:38:11 -- common/autotest_common.sh@653 -- # es=1 00:17:24.686 19:38:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.686 19:38:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.686 19:38:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.686 19:38:11 -- target/tls.sh@167 -- # killprocess 88168 00:17:24.686 19:38:11 -- common/autotest_common.sh@936 -- # '[' -z 88168 ']' 00:17:24.686 19:38:11 -- common/autotest_common.sh@940 -- # kill -0 88168 00:17:24.686 19:38:11 -- common/autotest_common.sh@941 -- # uname 00:17:24.686 19:38:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.686 19:38:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88168 00:17:24.943 killing process with pid 88168 00:17:24.943 19:38:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:24.943 19:38:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:24.943 19:38:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88168' 00:17:24.943 19:38:11 -- common/autotest_common.sh@955 -- # kill 88168 00:17:24.943 19:38:11 -- common/autotest_common.sh@960 -- # wait 88168 00:17:25.217 19:38:11 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:25.218 19:38:11 -- target/tls.sh@49 -- # local key hash crc 00:17:25.218 19:38:11 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:25.218 19:38:11 -- target/tls.sh@51 -- # hash=02 00:17:25.218 19:38:11 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:25.218 19:38:11 -- target/tls.sh@52 -- # gzip -1 -c 00:17:25.218 19:38:11 -- target/tls.sh@52 -- # tail -c8 00:17:25.218 19:38:11 -- target/tls.sh@52 -- # head -c 4 00:17:25.218 19:38:11 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:25.218 19:38:11 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:25.218 19:38:11 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:25.218 19:38:11 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.218 19:38:11 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.218 19:38:11 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.218 19:38:11 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.218 19:38:11 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.218 19:38:11 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:25.218 19:38:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:25.218 19:38:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.218 19:38:11 -- common/autotest_common.sh@10 -- # set +x 00:17:25.218 19:38:11 -- nvmf/common.sh@469 -- # nvmfpid=88889 00:17:25.218 19:38:11 -- nvmf/common.sh@470 -- # waitforlisten 88889 00:17:25.218 19:38:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.218 19:38:11 -- common/autotest_common.sh@829 -- # '[' -z 88889 ']' 00:17:25.218 19:38:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.218 19:38:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.218 19:38:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.218 19:38:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.218 19:38:11 -- common/autotest_common.sh@10 -- # set +x 00:17:25.218 [2024-12-15 19:38:11.928983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:25.218 [2024-12-15 19:38:11.929058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.218 [2024-12-15 19:38:12.062208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.476 [2024-12-15 19:38:12.143265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.476 [2024-12-15 19:38:12.143406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
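Note on the format_interchange_psk block above: it assembles the NVMe TLS interchange form of the key by taking the configured key string, appending its CRC32 (pulled from the gzip trailer), base64-encoding the result and wrapping it as NVMeTLSkey-1:<hash>:<...>:. A condensed re-derivation sketch of the same steps; the hash identifier 02 is simply copied from the call above (its mapping to a specific hash, e.g. SHA-384, is an assumption not stated in this log):

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer is CRC32 + size; keep the 4 CRC bytes
  printf 'NVMeTLSkey-1:02:%s:\n' "$(echo -n "$key$crc" | base64)"
  # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The resulting string is written to key_long.txt with mode 0600, which matters for the permission checks exercised further down.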
00:17:25.476 [2024-12-15 19:38:12.143427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.476 [2024-12-15 19:38:12.143435] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.476 [2024-12-15 19:38:12.143469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.043 19:38:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.043 19:38:12 -- common/autotest_common.sh@862 -- # return 0 00:17:26.043 19:38:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:26.043 19:38:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.043 19:38:12 -- common/autotest_common.sh@10 -- # set +x 00:17:26.043 19:38:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.043 19:38:12 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.043 19:38:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.043 19:38:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.611 [2024-12-15 19:38:13.200599] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.611 19:38:13 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.611 19:38:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.870 [2024-12-15 19:38:13.732673] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.870 [2024-12-15 19:38:13.732918] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.870 19:38:13 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.129 malloc0 00:17:27.129 19:38:13 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.388 19:38:14 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.647 19:38:14 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.647 19:38:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.647 19:38:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.647 19:38:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.647 19:38:14 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:27.647 19:38:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.647 19:38:14 -- target/tls.sh@28 -- # bdevperf_pid=88986 00:17:27.647 19:38:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.647 19:38:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.647 19:38:14 -- target/tls.sh@31 -- # waitforlisten 88986 /var/tmp/bdevperf.sock 00:17:27.647 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.647 19:38:14 -- common/autotest_common.sh@829 -- # '[' -z 88986 ']' 00:17:27.647 19:38:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.647 19:38:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.647 19:38:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.647 19:38:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.647 19:38:14 -- common/autotest_common.sh@10 -- # set +x 00:17:27.647 [2024-12-15 19:38:14.506493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:27.647 [2024-12-15 19:38:14.506575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88986 ] 00:17:27.906 [2024-12-15 19:38:14.643157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.906 [2024-12-15 19:38:14.719489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.845 19:38:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.845 19:38:15 -- common/autotest_common.sh@862 -- # return 0 00:17:28.846 19:38:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.846 [2024-12-15 19:38:15.719644] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.104 TLSTESTn1 00:17:29.104 19:38:15 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.104 Running I/O for 10 seconds... 
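Note: the run above ("Running I/O for 10 seconds...") is the first positive TLS case in this section. Condensed, the sequence it exercised is the following; every command appears verbatim earlier in this log, with paths shortened to repo-relative form here:

  # target side (setup_nvmf_tgt with key_long.txt)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt
  # initiator side: bdevperf started with -z waits on its RPC socket until a bdev is attached
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The I/O statistics for this run follow immediately below.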
00:17:39.081 00:17:39.081 Latency(us) 00:17:39.081 [2024-12-15T19:38:25.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.081 [2024-12-15T19:38:25.977Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:39.081 Verification LBA range: start 0x0 length 0x2000 00:17:39.081 TLSTESTn1 : 10.02 6645.33 25.96 0.00 0.00 19231.09 4110.89 17039.36 00:17:39.081 [2024-12-15T19:38:25.977Z] =================================================================================================================== 00:17:39.081 [2024-12-15T19:38:25.977Z] Total : 6645.33 25.96 0.00 0.00 19231.09 4110.89 17039.36 00:17:39.081 0 00:17:39.081 19:38:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.081 19:38:25 -- target/tls.sh@45 -- # killprocess 88986 00:17:39.081 19:38:25 -- common/autotest_common.sh@936 -- # '[' -z 88986 ']' 00:17:39.081 19:38:25 -- common/autotest_common.sh@940 -- # kill -0 88986 00:17:39.081 19:38:25 -- common/autotest_common.sh@941 -- # uname 00:17:39.081 19:38:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.081 19:38:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88986 00:17:39.340 19:38:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.340 19:38:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.340 killing process with pid 88986 00:17:39.340 19:38:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88986' 00:17:39.340 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.340 00:17:39.340 Latency(us) 00:17:39.340 [2024-12-15T19:38:26.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.340 [2024-12-15T19:38:26.236Z] =================================================================================================================== 00:17:39.340 [2024-12-15T19:38:26.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.340 19:38:25 -- common/autotest_common.sh@955 -- # kill 88986 00:17:39.340 19:38:25 -- common/autotest_common.sh@960 -- # wait 88986 00:17:39.599 19:38:26 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.599 19:38:26 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.599 19:38:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.599 19:38:26 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.599 19:38:26 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.599 19:38:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.599 19:38:26 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.599 19:38:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.599 19:38:26 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.599 19:38:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.599 19:38:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.599 19:38:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.599 19:38:26 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:39.599 19:38:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.599 19:38:26 -- target/tls.sh@28 -- # bdevperf_pid=89140 00:17:39.599 19:38:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.599 19:38:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.599 19:38:26 -- target/tls.sh@31 -- # waitforlisten 89140 /var/tmp/bdevperf.sock 00:17:39.599 19:38:26 -- common/autotest_common.sh@829 -- # '[' -z 89140 ']' 00:17:39.599 19:38:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.599 19:38:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.599 19:38:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.599 19:38:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.599 19:38:26 -- common/autotest_common.sh@10 -- # set +x 00:17:39.599 [2024-12-15 19:38:26.304143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:39.599 [2024-12-15 19:38:26.304264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89140 ] 00:17:39.599 [2024-12-15 19:38:26.443025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.858 [2024-12-15 19:38:26.498013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.425 19:38:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.425 19:38:27 -- common/autotest_common.sh@862 -- # return 0 00:17:40.425 19:38:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.684 [2024-12-15 19:38:27.565938] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.684 [2024-12-15 19:38:27.565986] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:40.684 2024/12/15 19:38:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.684 request: 00:17:40.684 { 00:17:40.684 "method": "bdev_nvme_attach_controller", 00:17:40.684 "params": { 00:17:40.684 "name": "TLSTEST", 00:17:40.684 "trtype": "tcp", 00:17:40.684 "traddr": "10.0.0.2", 00:17:40.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.684 "adrfam": "ipv4", 00:17:40.684 "trsvcid": "4420", 00:17:40.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.684 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:40.684 } 00:17:40.684 } 00:17:40.684 Got 
JSON-RPC error response 00:17:40.684 GoRPCClient: error on JSON-RPC call 00:17:40.943 19:38:27 -- target/tls.sh@36 -- # killprocess 89140 00:17:40.943 19:38:27 -- common/autotest_common.sh@936 -- # '[' -z 89140 ']' 00:17:40.943 19:38:27 -- common/autotest_common.sh@940 -- # kill -0 89140 00:17:40.943 19:38:27 -- common/autotest_common.sh@941 -- # uname 00:17:40.943 19:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.943 19:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89140 00:17:40.943 19:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.943 19:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.943 killing process with pid 89140 00:17:40.943 19:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89140' 00:17:40.943 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.943 00:17:40.943 Latency(us) 00:17:40.943 [2024-12-15T19:38:27.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.943 [2024-12-15T19:38:27.839Z] =================================================================================================================== 00:17:40.943 [2024-12-15T19:38:27.839Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.943 19:38:27 -- common/autotest_common.sh@955 -- # kill 89140 00:17:40.943 19:38:27 -- common/autotest_common.sh@960 -- # wait 89140 00:17:41.201 19:38:27 -- target/tls.sh@37 -- # return 1 00:17:41.201 19:38:27 -- common/autotest_common.sh@653 -- # es=1 00:17:41.201 19:38:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.201 19:38:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.201 19:38:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.201 19:38:27 -- target/tls.sh@183 -- # killprocess 88889 00:17:41.201 19:38:27 -- common/autotest_common.sh@936 -- # '[' -z 88889 ']' 00:17:41.201 19:38:27 -- common/autotest_common.sh@940 -- # kill -0 88889 00:17:41.201 19:38:27 -- common/autotest_common.sh@941 -- # uname 00:17:41.201 19:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.201 19:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88889 00:17:41.201 19:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.201 19:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.201 killing process with pid 88889 00:17:41.201 19:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88889' 00:17:41.202 19:38:27 -- common/autotest_common.sh@955 -- # kill 88889 00:17:41.202 19:38:27 -- common/autotest_common.sh@960 -- # wait 88889 00:17:41.460 19:38:28 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:41.460 19:38:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.460 19:38:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.460 19:38:28 -- common/autotest_common.sh@10 -- # set +x 00:17:41.460 19:38:28 -- nvmf/common.sh@469 -- # nvmfpid=89197 00:17:41.460 19:38:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.460 19:38:28 -- nvmf/common.sh@470 -- # waitforlisten 89197 00:17:41.460 19:38:28 -- common/autotest_common.sh@829 -- # '[' -z 89197 ']' 00:17:41.460 19:38:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.460 19:38:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.460 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.460 19:38:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.460 19:38:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.460 19:38:28 -- common/autotest_common.sh@10 -- # set +x 00:17:41.460 [2024-12-15 19:38:28.222197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:41.460 [2024-12-15 19:38:28.222266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.460 [2024-12-15 19:38:28.350027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.718 [2024-12-15 19:38:28.411772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.718 [2024-12-15 19:38:28.411968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.718 [2024-12-15 19:38:28.411982] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.718 [2024-12-15 19:38:28.411990] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.718 [2024-12-15 19:38:28.412016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.285 19:38:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.285 19:38:29 -- common/autotest_common.sh@862 -- # return 0 00:17:42.285 19:38:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.285 19:38:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.285 19:38:29 -- common/autotest_common.sh@10 -- # set +x 00:17:42.544 19:38:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.544 19:38:29 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.544 19:38:29 -- common/autotest_common.sh@650 -- # local es=0 00:17:42.544 19:38:29 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.544 19:38:29 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:42.544 19:38:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.544 19:38:29 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:42.544 19:38:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.544 19:38:29 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.544 19:38:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.544 19:38:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.802 [2024-12-15 19:38:29.480995] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.802 19:38:29 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.061 19:38:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.061 
[2024-12-15 19:38:29.873060] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.061 [2024-12-15 19:38:29.873311] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.061 19:38:29 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.320 malloc0 00:17:43.320 19:38:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.579 19:38:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.837 [2024-12-15 19:38:30.495197] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:43.837 [2024-12-15 19:38:30.495235] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:43.837 [2024-12-15 19:38:30.495251] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:43.837 2024/12/15 19:38:30 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:43.837 request: 00:17:43.837 { 00:17:43.837 "method": "nvmf_subsystem_add_host", 00:17:43.837 "params": { 00:17:43.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.837 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.837 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:43.837 } 00:17:43.837 } 00:17:43.837 Got JSON-RPC error response 00:17:43.837 GoRPCClient: error on JSON-RPC call 00:17:43.837 19:38:30 -- common/autotest_common.sh@653 -- # es=1 00:17:43.837 19:38:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.837 19:38:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.837 19:38:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.837 19:38:30 -- target/tls.sh@189 -- # killprocess 89197 00:17:43.837 19:38:30 -- common/autotest_common.sh@936 -- # '[' -z 89197 ']' 00:17:43.837 19:38:30 -- common/autotest_common.sh@940 -- # kill -0 89197 00:17:43.837 19:38:30 -- common/autotest_common.sh@941 -- # uname 00:17:43.837 19:38:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.838 19:38:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89197 00:17:43.838 19:38:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.838 19:38:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.838 killing process with pid 89197 00:17:43.838 19:38:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89197' 00:17:43.838 19:38:30 -- common/autotest_common.sh@955 -- # kill 89197 00:17:43.838 19:38:30 -- common/autotest_common.sh@960 -- # wait 89197 00:17:44.097 19:38:30 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.097 19:38:30 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:44.097 19:38:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:44.097 19:38:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.097 19:38:30 -- common/autotest_common.sh@10 -- # set +x 00:17:44.097 19:38:30 -- nvmf/common.sh@469 -- # nvmfpid=89303 
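Note: both PSK-related failures above trace back to the chmod 0666 at target/tls.sh@179: tcp_load_psk refuses a PSK file that is group- or world-accessible, on the initiator side (bdev_nvme_attach_controller, Code=-22 "Could not retrieve PSK from file") and on the target side (nvmf_subsystem_add_host, Code=-32603). The suite then restores owner-only access before continuing; condensed:

  chmod 0666 test/nvmf/target/key_long.txt   # loosened permissions: both RPCs above reject the key file
  chmod 0600 test/nvmf/target/key_long.txt   # owner-only again (tls.sh@190); subsequent tests can load it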
00:17:44.097 19:38:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.097 19:38:30 -- nvmf/common.sh@470 -- # waitforlisten 89303 00:17:44.097 19:38:30 -- common/autotest_common.sh@829 -- # '[' -z 89303 ']' 00:17:44.097 19:38:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.097 19:38:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.097 19:38:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.097 19:38:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.097 19:38:30 -- common/autotest_common.sh@10 -- # set +x 00:17:44.097 [2024-12-15 19:38:30.880065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:44.097 [2024-12-15 19:38:30.880170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.389 [2024-12-15 19:38:31.019319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.389 [2024-12-15 19:38:31.084584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.389 [2024-12-15 19:38:31.084744] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.389 [2024-12-15 19:38:31.084757] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.389 [2024-12-15 19:38:31.084765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
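Note: each nvmfappstart in this section follows the same bring-up pattern: launch nvmf_tgt inside the test network namespace, record its pid, and wait until the RPC socket answers before issuing configuration calls. A rough sketch of that pattern; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, whose exact implementation is not shown in this log:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the app's RPC socket until it responds (stand-in for waitforlisten)
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done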
00:17:44.389 [2024-12-15 19:38:31.084798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.957 19:38:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.957 19:38:31 -- common/autotest_common.sh@862 -- # return 0 00:17:44.957 19:38:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.957 19:38:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.957 19:38:31 -- common/autotest_common.sh@10 -- # set +x 00:17:45.216 19:38:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.216 19:38:31 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.216 19:38:31 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.216 19:38:31 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.475 [2024-12-15 19:38:32.147436] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.475 19:38:32 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.734 19:38:32 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.993 [2024-12-15 19:38:32.715509] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.993 [2024-12-15 19:38:32.715722] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.993 19:38:32 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.252 malloc0 00:17:46.252 19:38:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.511 19:38:33 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.770 19:38:33 -- target/tls.sh@197 -- # bdevperf_pid=89406 00:17:46.771 19:38:33 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.771 19:38:33 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.771 19:38:33 -- target/tls.sh@200 -- # waitforlisten 89406 /var/tmp/bdevperf.sock 00:17:46.771 19:38:33 -- common/autotest_common.sh@829 -- # '[' -z 89406 ']' 00:17:46.771 19:38:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.771 19:38:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.771 19:38:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.771 19:38:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.771 19:38:33 -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 [2024-12-15 19:38:33.633558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:46.771 [2024-12-15 19:38:33.633666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89406 ] 00:17:47.030 [2024-12-15 19:38:33.777321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.030 [2024-12-15 19:38:33.853983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.967 19:38:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.967 19:38:34 -- common/autotest_common.sh@862 -- # return 0 00:17:47.967 19:38:34 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.967 [2024-12-15 19:38:34.835210] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.226 TLSTESTn1 00:17:48.226 19:38:34 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:48.485 19:38:35 -- target/tls.sh@205 -- # tgtconf='{ 00:17:48.485 "subsystems": [ 00:17:48.485 { 00:17:48.485 "subsystem": "iobuf", 00:17:48.485 "config": [ 00:17:48.485 { 00:17:48.485 "method": "iobuf_set_options", 00:17:48.485 "params": { 00:17:48.485 "large_bufsize": 135168, 00:17:48.485 "large_pool_count": 1024, 00:17:48.485 "small_bufsize": 8192, 00:17:48.485 "small_pool_count": 8192 00:17:48.485 } 00:17:48.485 } 00:17:48.485 ] 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "subsystem": "sock", 00:17:48.485 "config": [ 00:17:48.485 { 00:17:48.485 "method": "sock_impl_set_options", 00:17:48.485 "params": { 00:17:48.485 "enable_ktls": false, 00:17:48.485 "enable_placement_id": 0, 00:17:48.485 "enable_quickack": false, 00:17:48.485 "enable_recv_pipe": true, 00:17:48.485 "enable_zerocopy_send_client": false, 00:17:48.485 "enable_zerocopy_send_server": true, 00:17:48.485 "impl_name": "posix", 00:17:48.485 "recv_buf_size": 2097152, 00:17:48.485 "send_buf_size": 2097152, 00:17:48.485 "tls_version": 0, 00:17:48.485 "zerocopy_threshold": 0 00:17:48.485 } 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "method": "sock_impl_set_options", 00:17:48.485 "params": { 00:17:48.485 "enable_ktls": false, 00:17:48.485 "enable_placement_id": 0, 00:17:48.485 "enable_quickack": false, 00:17:48.485 "enable_recv_pipe": true, 00:17:48.485 "enable_zerocopy_send_client": false, 00:17:48.485 "enable_zerocopy_send_server": true, 00:17:48.485 "impl_name": "ssl", 00:17:48.485 "recv_buf_size": 4096, 00:17:48.485 "send_buf_size": 4096, 00:17:48.485 "tls_version": 0, 00:17:48.486 "zerocopy_threshold": 0 00:17:48.486 } 00:17:48.486 } 00:17:48.486 ] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "vmd", 00:17:48.486 "config": [] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "accel", 00:17:48.486 "config": [ 00:17:48.486 { 00:17:48.486 "method": "accel_set_options", 00:17:48.486 "params": { 00:17:48.486 "buf_count": 2048, 00:17:48.486 "large_cache_size": 16, 00:17:48.486 "sequence_count": 2048, 00:17:48.486 "small_cache_size": 128, 00:17:48.486 "task_count": 2048 00:17:48.486 } 00:17:48.486 } 00:17:48.486 ] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "bdev", 00:17:48.486 "config": [ 00:17:48.486 { 00:17:48.486 "method": "bdev_set_options", 00:17:48.486 "params": { 00:17:48.486 
"bdev_auto_examine": true, 00:17:48.486 "bdev_io_cache_size": 256, 00:17:48.486 "bdev_io_pool_size": 65535, 00:17:48.486 "iobuf_large_cache_size": 16, 00:17:48.486 "iobuf_small_cache_size": 128 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_raid_set_options", 00:17:48.486 "params": { 00:17:48.486 "process_window_size_kb": 1024 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_iscsi_set_options", 00:17:48.486 "params": { 00:17:48.486 "timeout_sec": 30 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_nvme_set_options", 00:17:48.486 "params": { 00:17:48.486 "action_on_timeout": "none", 00:17:48.486 "allow_accel_sequence": false, 00:17:48.486 "arbitration_burst": 0, 00:17:48.486 "bdev_retry_count": 3, 00:17:48.486 "ctrlr_loss_timeout_sec": 0, 00:17:48.486 "delay_cmd_submit": true, 00:17:48.486 "fast_io_fail_timeout_sec": 0, 00:17:48.486 "generate_uuids": false, 00:17:48.486 "high_priority_weight": 0, 00:17:48.486 "io_path_stat": false, 00:17:48.486 "io_queue_requests": 0, 00:17:48.486 "keep_alive_timeout_ms": 10000, 00:17:48.486 "low_priority_weight": 0, 00:17:48.486 "medium_priority_weight": 0, 00:17:48.486 "nvme_adminq_poll_period_us": 10000, 00:17:48.486 "nvme_ioq_poll_period_us": 0, 00:17:48.486 "reconnect_delay_sec": 0, 00:17:48.486 "timeout_admin_us": 0, 00:17:48.486 "timeout_us": 0, 00:17:48.486 "transport_ack_timeout": 0, 00:17:48.486 "transport_retry_count": 4, 00:17:48.486 "transport_tos": 0 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_nvme_set_hotplug", 00:17:48.486 "params": { 00:17:48.486 "enable": false, 00:17:48.486 "period_us": 100000 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_malloc_create", 00:17:48.486 "params": { 00:17:48.486 "block_size": 4096, 00:17:48.486 "name": "malloc0", 00:17:48.486 "num_blocks": 8192, 00:17:48.486 "optimal_io_boundary": 0, 00:17:48.486 "physical_block_size": 4096, 00:17:48.486 "uuid": "9d161aff-7635-4084-b9d7-ca187b1b6ab2" 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "bdev_wait_for_examine" 00:17:48.486 } 00:17:48.486 ] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "nbd", 00:17:48.486 "config": [] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "scheduler", 00:17:48.486 "config": [ 00:17:48.486 { 00:17:48.486 "method": "framework_set_scheduler", 00:17:48.486 "params": { 00:17:48.486 "name": "static" 00:17:48.486 } 00:17:48.486 } 00:17:48.486 ] 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "subsystem": "nvmf", 00:17:48.486 "config": [ 00:17:48.486 { 00:17:48.486 "method": "nvmf_set_config", 00:17:48.486 "params": { 00:17:48.486 "admin_cmd_passthru": { 00:17:48.486 "identify_ctrlr": false 00:17:48.486 }, 00:17:48.486 "discovery_filter": "match_any" 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_set_max_subsystems", 00:17:48.486 "params": { 00:17:48.486 "max_subsystems": 1024 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_set_crdt", 00:17:48.486 "params": { 00:17:48.486 "crdt1": 0, 00:17:48.486 "crdt2": 0, 00:17:48.486 "crdt3": 0 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_create_transport", 00:17:48.486 "params": { 00:17:48.486 "abort_timeout_sec": 1, 00:17:48.486 "buf_cache_size": 4294967295, 00:17:48.486 "c2h_success": false, 00:17:48.486 "dif_insert_or_strip": false, 00:17:48.486 "in_capsule_data_size": 4096, 00:17:48.486 "io_unit_size": 131072, 00:17:48.486 "max_aq_depth": 128, 
00:17:48.486 "max_io_qpairs_per_ctrlr": 127, 00:17:48.486 "max_io_size": 131072, 00:17:48.486 "max_queue_depth": 128, 00:17:48.486 "num_shared_buffers": 511, 00:17:48.486 "sock_priority": 0, 00:17:48.486 "trtype": "TCP", 00:17:48.486 "zcopy": false 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_create_subsystem", 00:17:48.486 "params": { 00:17:48.486 "allow_any_host": false, 00:17:48.486 "ana_reporting": false, 00:17:48.486 "max_cntlid": 65519, 00:17:48.486 "max_namespaces": 10, 00:17:48.486 "min_cntlid": 1, 00:17:48.486 "model_number": "SPDK bdev Controller", 00:17:48.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.486 "serial_number": "SPDK00000000000001" 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_subsystem_add_host", 00:17:48.486 "params": { 00:17:48.486 "host": "nqn.2016-06.io.spdk:host1", 00:17:48.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.486 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:48.486 } 00:17:48.486 }, 00:17:48.486 { 00:17:48.486 "method": "nvmf_subsystem_add_ns", 00:17:48.486 "params": { 00:17:48.486 "namespace": { 00:17:48.486 "bdev_name": "malloc0", 00:17:48.486 "nguid": "9D161AFF76354084B9D7CA187B1B6AB2", 00:17:48.486 "nsid": 1, 00:17:48.486 "uuid": "9d161aff-7635-4084-b9d7-ca187b1b6ab2" 00:17:48.487 }, 00:17:48.487 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:48.487 } 00:17:48.487 }, 00:17:48.487 { 00:17:48.487 "method": "nvmf_subsystem_add_listener", 00:17:48.487 "params": { 00:17:48.487 "listen_address": { 00:17:48.487 "adrfam": "IPv4", 00:17:48.487 "traddr": "10.0.0.2", 00:17:48.487 "trsvcid": "4420", 00:17:48.487 "trtype": "TCP" 00:17:48.487 }, 00:17:48.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.487 "secure_channel": true 00:17:48.487 } 00:17:48.487 } 00:17:48.487 ] 00:17:48.487 } 00:17:48.487 ] 00:17:48.487 }' 00:17:48.487 19:38:35 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:48.745 19:38:35 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:48.745 "subsystems": [ 00:17:48.745 { 00:17:48.745 "subsystem": "iobuf", 00:17:48.745 "config": [ 00:17:48.745 { 00:17:48.746 "method": "iobuf_set_options", 00:17:48.746 "params": { 00:17:48.746 "large_bufsize": 135168, 00:17:48.746 "large_pool_count": 1024, 00:17:48.746 "small_bufsize": 8192, 00:17:48.746 "small_pool_count": 8192 00:17:48.746 } 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "subsystem": "sock", 00:17:48.746 "config": [ 00:17:48.746 { 00:17:48.746 "method": "sock_impl_set_options", 00:17:48.746 "params": { 00:17:48.746 "enable_ktls": false, 00:17:48.746 "enable_placement_id": 0, 00:17:48.746 "enable_quickack": false, 00:17:48.746 "enable_recv_pipe": true, 00:17:48.746 "enable_zerocopy_send_client": false, 00:17:48.746 "enable_zerocopy_send_server": true, 00:17:48.746 "impl_name": "posix", 00:17:48.746 "recv_buf_size": 2097152, 00:17:48.746 "send_buf_size": 2097152, 00:17:48.746 "tls_version": 0, 00:17:48.746 "zerocopy_threshold": 0 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "sock_impl_set_options", 00:17:48.746 "params": { 00:17:48.746 "enable_ktls": false, 00:17:48.746 "enable_placement_id": 0, 00:17:48.746 "enable_quickack": false, 00:17:48.746 "enable_recv_pipe": true, 00:17:48.746 "enable_zerocopy_send_client": false, 00:17:48.746 "enable_zerocopy_send_server": true, 00:17:48.746 "impl_name": "ssl", 00:17:48.746 "recv_buf_size": 4096, 00:17:48.746 "send_buf_size": 4096, 00:17:48.746 
"tls_version": 0, 00:17:48.746 "zerocopy_threshold": 0 00:17:48.746 } 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "subsystem": "vmd", 00:17:48.746 "config": [] 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "subsystem": "accel", 00:17:48.746 "config": [ 00:17:48.746 { 00:17:48.746 "method": "accel_set_options", 00:17:48.746 "params": { 00:17:48.746 "buf_count": 2048, 00:17:48.746 "large_cache_size": 16, 00:17:48.746 "sequence_count": 2048, 00:17:48.746 "small_cache_size": 128, 00:17:48.746 "task_count": 2048 00:17:48.746 } 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "subsystem": "bdev", 00:17:48.746 "config": [ 00:17:48.746 { 00:17:48.746 "method": "bdev_set_options", 00:17:48.746 "params": { 00:17:48.746 "bdev_auto_examine": true, 00:17:48.746 "bdev_io_cache_size": 256, 00:17:48.746 "bdev_io_pool_size": 65535, 00:17:48.746 "iobuf_large_cache_size": 16, 00:17:48.746 "iobuf_small_cache_size": 128 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_raid_set_options", 00:17:48.746 "params": { 00:17:48.746 "process_window_size_kb": 1024 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_iscsi_set_options", 00:17:48.746 "params": { 00:17:48.746 "timeout_sec": 30 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_nvme_set_options", 00:17:48.746 "params": { 00:17:48.746 "action_on_timeout": "none", 00:17:48.746 "allow_accel_sequence": false, 00:17:48.746 "arbitration_burst": 0, 00:17:48.746 "bdev_retry_count": 3, 00:17:48.746 "ctrlr_loss_timeout_sec": 0, 00:17:48.746 "delay_cmd_submit": true, 00:17:48.746 "fast_io_fail_timeout_sec": 0, 00:17:48.746 "generate_uuids": false, 00:17:48.746 "high_priority_weight": 0, 00:17:48.746 "io_path_stat": false, 00:17:48.746 "io_queue_requests": 512, 00:17:48.746 "keep_alive_timeout_ms": 10000, 00:17:48.746 "low_priority_weight": 0, 00:17:48.746 "medium_priority_weight": 0, 00:17:48.746 "nvme_adminq_poll_period_us": 10000, 00:17:48.746 "nvme_ioq_poll_period_us": 0, 00:17:48.746 "reconnect_delay_sec": 0, 00:17:48.746 "timeout_admin_us": 0, 00:17:48.746 "timeout_us": 0, 00:17:48.746 "transport_ack_timeout": 0, 00:17:48.746 "transport_retry_count": 4, 00:17:48.746 "transport_tos": 0 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_nvme_attach_controller", 00:17:48.746 "params": { 00:17:48.746 "adrfam": "IPv4", 00:17:48.746 "ctrlr_loss_timeout_sec": 0, 00:17:48.746 "ddgst": false, 00:17:48.746 "fast_io_fail_timeout_sec": 0, 00:17:48.746 "hdgst": false, 00:17:48.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.746 "name": "TLSTEST", 00:17:48.746 "prchk_guard": false, 00:17:48.746 "prchk_reftag": false, 00:17:48.746 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:48.746 "reconnect_delay_sec": 0, 00:17:48.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.746 "traddr": "10.0.0.2", 00:17:48.746 "trsvcid": "4420", 00:17:48.746 "trtype": "TCP" 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_nvme_set_hotplug", 00:17:48.746 "params": { 00:17:48.746 "enable": false, 00:17:48.746 "period_us": 100000 00:17:48.746 } 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "method": "bdev_wait_for_examine" 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "subsystem": "nbd", 00:17:48.746 "config": [] 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }' 00:17:48.746 19:38:35 -- target/tls.sh@208 -- # killprocess 89406 00:17:48.746 19:38:35 -- 
common/autotest_common.sh@936 -- # '[' -z 89406 ']' 00:17:48.746 19:38:35 -- common/autotest_common.sh@940 -- # kill -0 89406 00:17:48.746 19:38:35 -- common/autotest_common.sh@941 -- # uname 00:17:48.746 19:38:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.746 19:38:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89406 00:17:48.746 19:38:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:48.746 19:38:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:48.746 killing process with pid 89406 00:17:48.746 19:38:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89406' 00:17:48.746 19:38:35 -- common/autotest_common.sh@955 -- # kill 89406 00:17:48.746 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.746 00:17:48.746 Latency(us) 00:17:48.746 [2024-12-15T19:38:35.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.746 [2024-12-15T19:38:35.642Z] =================================================================================================================== 00:17:48.746 [2024-12-15T19:38:35.642Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.746 19:38:35 -- common/autotest_common.sh@960 -- # wait 89406 00:17:49.005 19:38:35 -- target/tls.sh@209 -- # killprocess 89303 00:17:49.005 19:38:35 -- common/autotest_common.sh@936 -- # '[' -z 89303 ']' 00:17:49.005 19:38:35 -- common/autotest_common.sh@940 -- # kill -0 89303 00:17:49.005 19:38:35 -- common/autotest_common.sh@941 -- # uname 00:17:49.005 19:38:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.005 19:38:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89303 00:17:49.005 19:38:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:49.005 19:38:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:49.005 killing process with pid 89303 00:17:49.005 19:38:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89303' 00:17:49.005 19:38:35 -- common/autotest_common.sh@955 -- # kill 89303 00:17:49.005 19:38:35 -- common/autotest_common.sh@960 -- # wait 89303 00:17:49.265 19:38:36 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:49.265 19:38:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:49.265 19:38:36 -- target/tls.sh@212 -- # echo '{ 00:17:49.265 "subsystems": [ 00:17:49.265 { 00:17:49.265 "subsystem": "iobuf", 00:17:49.265 "config": [ 00:17:49.265 { 00:17:49.265 "method": "iobuf_set_options", 00:17:49.265 "params": { 00:17:49.265 "large_bufsize": 135168, 00:17:49.265 "large_pool_count": 1024, 00:17:49.265 "small_bufsize": 8192, 00:17:49.265 "small_pool_count": 8192 00:17:49.265 } 00:17:49.265 } 00:17:49.265 ] 00:17:49.265 }, 00:17:49.265 { 00:17:49.265 "subsystem": "sock", 00:17:49.265 "config": [ 00:17:49.265 { 00:17:49.265 "method": "sock_impl_set_options", 00:17:49.265 "params": { 00:17:49.265 "enable_ktls": false, 00:17:49.265 "enable_placement_id": 0, 00:17:49.265 "enable_quickack": false, 00:17:49.265 "enable_recv_pipe": true, 00:17:49.265 "enable_zerocopy_send_client": false, 00:17:49.265 "enable_zerocopy_send_server": true, 00:17:49.265 "impl_name": "posix", 00:17:49.265 "recv_buf_size": 2097152, 00:17:49.265 "send_buf_size": 2097152, 00:17:49.265 "tls_version": 0, 00:17:49.265 "zerocopy_threshold": 0 00:17:49.265 } 00:17:49.265 }, 00:17:49.265 { 00:17:49.265 "method": "sock_impl_set_options", 00:17:49.265 "params": { 00:17:49.265 
"enable_ktls": false, 00:17:49.265 "enable_placement_id": 0, 00:17:49.265 "enable_quickack": false, 00:17:49.265 "enable_recv_pipe": true, 00:17:49.265 "enable_zerocopy_send_client": false, 00:17:49.265 "enable_zerocopy_send_server": true, 00:17:49.265 "impl_name": "ssl", 00:17:49.265 "recv_buf_size": 4096, 00:17:49.265 "send_buf_size": 4096, 00:17:49.265 "tls_version": 0, 00:17:49.265 "zerocopy_threshold": 0 00:17:49.265 } 00:17:49.265 } 00:17:49.265 ] 00:17:49.265 }, 00:17:49.265 { 00:17:49.265 "subsystem": "vmd", 00:17:49.265 "config": [] 00:17:49.265 }, 00:17:49.265 { 00:17:49.265 "subsystem": "accel", 00:17:49.265 "config": [ 00:17:49.265 { 00:17:49.265 "method": "accel_set_options", 00:17:49.265 "params": { 00:17:49.266 "buf_count": 2048, 00:17:49.266 "large_cache_size": 16, 00:17:49.266 "sequence_count": 2048, 00:17:49.266 "small_cache_size": 128, 00:17:49.266 "task_count": 2048 00:17:49.266 } 00:17:49.266 } 00:17:49.266 ] 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "subsystem": "bdev", 00:17:49.266 "config": [ 00:17:49.266 { 00:17:49.266 "method": "bdev_set_options", 00:17:49.266 "params": { 00:17:49.266 "bdev_auto_examine": true, 00:17:49.266 "bdev_io_cache_size": 256, 00:17:49.266 "bdev_io_pool_size": 65535, 00:17:49.266 "iobuf_large_cache_size": 16, 00:17:49.266 "iobuf_small_cache_size": 128 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_raid_set_options", 00:17:49.266 "params": { 00:17:49.266 "process_window_size_kb": 1024 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_iscsi_set_options", 00:17:49.266 "params": { 00:17:49.266 "timeout_sec": 30 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_nvme_set_options", 00:17:49.266 "params": { 00:17:49.266 "action_on_timeout": "none", 00:17:49.266 "allow_accel_sequence": false, 00:17:49.266 "arbitration_burst": 0, 00:17:49.266 "bdev_retry_count": 3, 00:17:49.266 "ctrlr_loss_timeout_sec": 0, 00:17:49.266 "delay_cmd_submit": true, 00:17:49.266 "fast_io_fail_timeout_sec": 0, 00:17:49.266 "generate_uuids": false, 00:17:49.266 "high_priority_weight": 0, 00:17:49.266 "io_path_stat": false, 00:17:49.266 "io_queue_requests": 0, 00:17:49.266 "keep_alive_timeout_ms": 10000, 00:17:49.266 "low_priority_weight": 0, 00:17:49.266 "medium_priority_weight": 0, 00:17:49.266 "nvme_adminq_poll_period_us": 10000, 00:17:49.266 "nvme_ioq_poll_period_us": 0, 00:17:49.266 "reconnect_delay_sec": 0, 00:17:49.266 "timeout_admin_us": 0, 00:17:49.266 "timeout_us": 0, 00:17:49.266 "transport_ack_timeout": 0, 00:17:49.266 "transport_retry_count": 4, 00:17:49.266 "transport_tos": 0 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_nvme_set_hotplug", 00:17:49.266 "params": { 00:17:49.266 "enable": false, 00:17:49.266 "period_us": 100000 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_malloc_create", 00:17:49.266 "params": { 00:17:49.266 "block_size": 4096, 00:17:49.266 "name": "malloc0", 00:17:49.266 "num_blocks": 8192, 00:17:49.266 "optimal_io_boundary": 0, 00:17:49.266 "physical_block_size": 4096, 00:17:49.266 "uuid": "9d161aff-7635-4084-b9d7-ca187b1b6ab2" 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "bdev_wait_for_examine" 00:17:49.266 } 00:17:49.266 ] 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "subsystem": "nbd", 00:17:49.266 "config": [] 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "subsystem": "scheduler", 00:17:49.266 "config": [ 00:17:49.266 { 00:17:49.266 "method": "framework_set_scheduler", 00:17:49.266 
"params": { 00:17:49.266 "name": "static" 00:17:49.266 } 00:17:49.266 } 00:17:49.266 ] 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "subsystem": "nvmf", 00:17:49.266 "config": [ 00:17:49.266 { 00:17:49.266 "method": "nvmf_set_config", 00:17:49.266 "params": { 00:17:49.266 "admin_cmd_passthru": { 00:17:49.266 "identify_ctrlr": false 00:17:49.266 }, 00:17:49.266 "discovery_filter": "match_any" 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_set_max_subsystems", 00:17:49.266 "params": { 00:17:49.266 "max_subsystems": 1024 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_set_crdt", 00:17:49.266 "params": { 00:17:49.266 "crdt1": 0, 00:17:49.266 "crdt2": 0, 00:17:49.266 "crdt3": 0 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_create_transport", 00:17:49.266 "params": { 00:17:49.266 "abort_timeout_sec": 1, 00:17:49.266 "buf_cache_size": 4294967295, 00:17:49.266 "c2h_success": false, 00:17:49.266 "dif_insert_or_strip": false, 00:17:49.266 "in_capsule_data_size": 4096, 00:17:49.266 "io_unit_size": 131072, 00:17:49.266 "max_aq_depth": 128, 00:17:49.266 "max_io_qpairs_per_ctrlr": 127, 00:17:49.266 "max_io_size": 131072, 00:17:49.266 "max_queue_depth": 128, 00:17:49.266 "num_shared_buffers": 511, 00:17:49.266 "sock_priority": 0, 00:17:49.266 "trtype": "TCP", 00:17:49.266 "zcopy": false 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_create_subsystem", 00:17:49.266 "params": { 00:17:49.266 "allow_any_host": false, 00:17:49.266 "ana_reporting": false, 00:17:49.266 "max_cntlid": 65519, 00:17:49.266 "max_namespaces": 10, 00:17:49.266 "min_cntlid": 1, 00:17:49.266 "model_number": "SPDK bdev Controller", 00:17:49.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.266 "serial_number": "SPDK00000000000001" 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_subsystem_add_host", 00:17:49.266 "params": { 00:17:49.266 "host": "nqn.2016-06.io.spdk:host1", 00:17:49.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.266 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:49.266 } 00:17:49.266 }, 00:17:49.266 { 00:17:49.266 "method": "nvmf_subsystem_add_ns", 00:17:49.266 "params": { 00:17:49.266 "namespace": { 00:17:49.266 "bdev_name": "malloc0", 00:17:49.266 "nguid": "9D161AFF76354084B9D7CA187B1B6AB2", 00:17:49.266 "nsid": 1, 00:17:49.266 "uuid": "9d161aff-7635-4084-b9d7-ca187b1b6ab2" 00:17:49.266 }, 00:17:49.266 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:49.266 } 00:17:49.267 }, 00:17:49.267 { 00:17:49.267 "method": "nvmf_subsystem_add_listener", 00:17:49.267 "params": { 00:17:49.267 "listen_address": { 00:17:49.267 "adrfam": "IPv4", 00:17:49.267 "traddr": "10.0.0.2", 00:17:49.267 "trsvcid": "4420", 00:17:49.267 "trtype": "TCP" 00:17:49.267 }, 00:17:49.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.267 "secure_channel": true 00:17:49.267 } 00:17:49.267 } 00:17:49.267 ] 00:17:49.267 } 00:17:49.267 ] 00:17:49.267 }' 00:17:49.267 19:38:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.267 19:38:36 -- common/autotest_common.sh@10 -- # set +x 00:17:49.526 19:38:36 -- nvmf/common.sh@469 -- # nvmfpid=89489 00:17:49.526 19:38:36 -- nvmf/common.sh@470 -- # waitforlisten 89489 00:17:49.526 19:38:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:49.526 19:38:36 -- common/autotest_common.sh@829 -- # '[' -z 89489 ']' 00:17:49.526 19:38:36 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.526 19:38:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.526 19:38:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.526 19:38:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.526 19:38:36 -- common/autotest_common.sh@10 -- # set +x 00:17:49.526 [2024-12-15 19:38:36.207036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:49.526 [2024-12-15 19:38:36.207132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.526 [2024-12-15 19:38:36.340695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.526 [2024-12-15 19:38:36.400644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:49.526 [2024-12-15 19:38:36.400798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.526 [2024-12-15 19:38:36.400811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.526 [2024-12-15 19:38:36.400841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.526 [2024-12-15 19:38:36.400877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.786 [2024-12-15 19:38:36.647422] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.786 [2024-12-15 19:38:36.679387] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.786 [2024-12-15 19:38:36.679681] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.354 19:38:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.354 19:38:37 -- common/autotest_common.sh@862 -- # return 0 00:17:50.354 19:38:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:50.354 19:38:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.354 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:17:50.354 19:38:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.354 19:38:37 -- target/tls.sh@216 -- # bdevperf_pid=89529 00:17:50.355 19:38:37 -- target/tls.sh@217 -- # waitforlisten 89529 /var/tmp/bdevperf.sock 00:17:50.355 19:38:37 -- common/autotest_common.sh@829 -- # '[' -z 89529 ']' 00:17:50.355 19:38:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.355 19:38:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.355 19:38:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
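Both halves of the TLS test above reference the same pre-shared key file: the target configuration binds it to the host entry of the subsystem, and bdevperf passes the same path when attaching the controller. Condensed from the save_config dump and the attach command earlier in this run (the NQNs, address, and key path are specific to this run, not defaults):

    # target side, from the save_config output above
    { "method": "nvmf_subsystem_add_host",
      "params": { "host": "nqn.2016-06.io.spdk:host1",
                  "nqn":  "nqn.2016-06.io.spdk:cnode1",
                  "psk":  "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" } }

    # initiator side, as issued against the bdevperf RPC socket at target/tls.sh@201
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

If the two sides load different key material, the handshake cannot complete and the attach fails instead of creating the TLSTESTn1 bdev used by the verify workload below.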
00:17:50.355 19:38:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.355 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:17:50.355 19:38:37 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:50.355 19:38:37 -- target/tls.sh@213 -- # echo '{ 00:17:50.355 "subsystems": [ 00:17:50.355 { 00:17:50.355 "subsystem": "iobuf", 00:17:50.355 "config": [ 00:17:50.355 { 00:17:50.355 "method": "iobuf_set_options", 00:17:50.355 "params": { 00:17:50.355 "large_bufsize": 135168, 00:17:50.355 "large_pool_count": 1024, 00:17:50.355 "small_bufsize": 8192, 00:17:50.355 "small_pool_count": 8192 00:17:50.355 } 00:17:50.355 } 00:17:50.355 ] 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "subsystem": "sock", 00:17:50.355 "config": [ 00:17:50.355 { 00:17:50.355 "method": "sock_impl_set_options", 00:17:50.355 "params": { 00:17:50.355 "enable_ktls": false, 00:17:50.355 "enable_placement_id": 0, 00:17:50.355 "enable_quickack": false, 00:17:50.355 "enable_recv_pipe": true, 00:17:50.355 "enable_zerocopy_send_client": false, 00:17:50.355 "enable_zerocopy_send_server": true, 00:17:50.355 "impl_name": "posix", 00:17:50.355 "recv_buf_size": 2097152, 00:17:50.355 "send_buf_size": 2097152, 00:17:50.355 "tls_version": 0, 00:17:50.355 "zerocopy_threshold": 0 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "sock_impl_set_options", 00:17:50.355 "params": { 00:17:50.355 "enable_ktls": false, 00:17:50.355 "enable_placement_id": 0, 00:17:50.355 "enable_quickack": false, 00:17:50.355 "enable_recv_pipe": true, 00:17:50.355 "enable_zerocopy_send_client": false, 00:17:50.355 "enable_zerocopy_send_server": true, 00:17:50.355 "impl_name": "ssl", 00:17:50.355 "recv_buf_size": 4096, 00:17:50.355 "send_buf_size": 4096, 00:17:50.355 "tls_version": 0, 00:17:50.355 "zerocopy_threshold": 0 00:17:50.355 } 00:17:50.355 } 00:17:50.355 ] 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "subsystem": "vmd", 00:17:50.355 "config": [] 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "subsystem": "accel", 00:17:50.355 "config": [ 00:17:50.355 { 00:17:50.355 "method": "accel_set_options", 00:17:50.355 "params": { 00:17:50.355 "buf_count": 2048, 00:17:50.355 "large_cache_size": 16, 00:17:50.355 "sequence_count": 2048, 00:17:50.355 "small_cache_size": 128, 00:17:50.355 "task_count": 2048 00:17:50.355 } 00:17:50.355 } 00:17:50.355 ] 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "subsystem": "bdev", 00:17:50.355 "config": [ 00:17:50.355 { 00:17:50.355 "method": "bdev_set_options", 00:17:50.355 "params": { 00:17:50.355 "bdev_auto_examine": true, 00:17:50.355 "bdev_io_cache_size": 256, 00:17:50.355 "bdev_io_pool_size": 65535, 00:17:50.355 "iobuf_large_cache_size": 16, 00:17:50.355 "iobuf_small_cache_size": 128 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_raid_set_options", 00:17:50.355 "params": { 00:17:50.355 "process_window_size_kb": 1024 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_iscsi_set_options", 00:17:50.355 "params": { 00:17:50.355 "timeout_sec": 30 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_nvme_set_options", 00:17:50.355 "params": { 00:17:50.355 "action_on_timeout": "none", 00:17:50.355 "allow_accel_sequence": false, 00:17:50.355 "arbitration_burst": 0, 00:17:50.355 "bdev_retry_count": 3, 00:17:50.355 "ctrlr_loss_timeout_sec": 0, 00:17:50.355 "delay_cmd_submit": true, 00:17:50.355 "fast_io_fail_timeout_sec": 0, 
00:17:50.355 "generate_uuids": false, 00:17:50.355 "high_priority_weight": 0, 00:17:50.355 "io_path_stat": false, 00:17:50.355 "io_queue_requests": 512, 00:17:50.355 "keep_alive_timeout_ms": 10000, 00:17:50.355 "low_priority_weight": 0, 00:17:50.355 "medium_priority_weight": 0, 00:17:50.355 "nvme_adminq_poll_period_us": 10000, 00:17:50.355 "nvme_ioq_poll_period_us": 0, 00:17:50.355 "reconnect_delay_sec": 0, 00:17:50.355 "timeout_admin_us": 0, 00:17:50.355 "timeout_us": 0, 00:17:50.355 "transport_ack_timeout": 0, 00:17:50.355 "transport_retry_count": 4, 00:17:50.355 "transport_tos": 0 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_nvme_attach_controller", 00:17:50.355 "params": { 00:17:50.355 "adrfam": "IPv4", 00:17:50.355 "ctrlr_loss_timeout_sec": 0, 00:17:50.355 "ddgst": false, 00:17:50.355 "fast_io_fail_timeout_sec": 0, 00:17:50.355 "hdgst": false, 00:17:50.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.355 "name": "TLSTEST", 00:17:50.355 "prchk_guard": false, 00:17:50.355 "prchk_reftag": false, 00:17:50.355 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:50.355 "reconnect_delay_sec": 0, 00:17:50.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.355 "traddr": "10.0.0.2", 00:17:50.355 "trsvcid": "4420", 00:17:50.355 "trtype": "TCP" 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_nvme_set_hotplug", 00:17:50.355 "params": { 00:17:50.355 "enable": false, 00:17:50.355 "period_us": 100000 00:17:50.355 } 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "method": "bdev_wait_for_examine" 00:17:50.355 } 00:17:50.355 ] 00:17:50.355 }, 00:17:50.355 { 00:17:50.355 "subsystem": "nbd", 00:17:50.355 "config": [] 00:17:50.355 } 00:17:50.355 ] 00:17:50.355 }' 00:17:50.615 [2024-12-15 19:38:37.258635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:50.615 [2024-12-15 19:38:37.259355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89529 ] 00:17:50.615 [2024-12-15 19:38:37.396703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.615 [2024-12-15 19:38:37.467661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.874 [2024-12-15 19:38:37.645944] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.443 19:38:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.443 19:38:38 -- common/autotest_common.sh@862 -- # return 0 00:17:51.443 19:38:38 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:51.702 Running I/O for 10 seconds... 
00:18:01.680 00:18:01.680 Latency(us) 00:18:01.680 [2024-12-15T19:38:48.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.680 [2024-12-15T19:38:48.576Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.680 Verification LBA range: start 0x0 length 0x2000 00:18:01.680 TLSTESTn1 : 10.01 7069.75 27.62 0.00 0.00 18078.64 4021.53 17754.30 00:18:01.680 [2024-12-15T19:38:48.576Z] =================================================================================================================== 00:18:01.680 [2024-12-15T19:38:48.576Z] Total : 7069.75 27.62 0.00 0.00 18078.64 4021.53 17754.30 00:18:01.680 0 00:18:01.680 19:38:48 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.680 19:38:48 -- target/tls.sh@223 -- # killprocess 89529 00:18:01.680 19:38:48 -- common/autotest_common.sh@936 -- # '[' -z 89529 ']' 00:18:01.680 19:38:48 -- common/autotest_common.sh@940 -- # kill -0 89529 00:18:01.680 19:38:48 -- common/autotest_common.sh@941 -- # uname 00:18:01.680 19:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.680 19:38:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89529 00:18:01.680 19:38:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:01.680 19:38:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:01.680 killing process with pid 89529 00:18:01.680 19:38:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89529' 00:18:01.680 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.680 00:18:01.680 Latency(us) 00:18:01.680 [2024-12-15T19:38:48.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.680 [2024-12-15T19:38:48.576Z] =================================================================================================================== 00:18:01.680 [2024-12-15T19:38:48.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.680 19:38:48 -- common/autotest_common.sh@955 -- # kill 89529 00:18:01.680 19:38:48 -- common/autotest_common.sh@960 -- # wait 89529 00:18:01.940 19:38:48 -- target/tls.sh@224 -- # killprocess 89489 00:18:01.940 19:38:48 -- common/autotest_common.sh@936 -- # '[' -z 89489 ']' 00:18:01.940 19:38:48 -- common/autotest_common.sh@940 -- # kill -0 89489 00:18:01.940 19:38:48 -- common/autotest_common.sh@941 -- # uname 00:18:01.940 19:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.940 19:38:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89489 00:18:01.940 19:38:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.940 killing process with pid 89489 00:18:01.940 19:38:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.940 19:38:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89489' 00:18:01.940 19:38:48 -- common/autotest_common.sh@955 -- # kill 89489 00:18:01.940 19:38:48 -- common/autotest_common.sh@960 -- # wait 89489 00:18:02.200 19:38:49 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:02.200 19:38:49 -- target/tls.sh@227 -- # cleanup 00:18:02.200 19:38:49 -- target/tls.sh@15 -- # process_shm --id 0 00:18:02.200 19:38:49 -- common/autotest_common.sh@806 -- # type=--id 00:18:02.200 19:38:49 -- common/autotest_common.sh@807 -- # id=0 00:18:02.200 19:38:49 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:02.200 19:38:49 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:02.200 19:38:49 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:02.200 19:38:49 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:02.200 19:38:49 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:02.200 19:38:49 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:02.200 nvmf_trace.0 00:18:02.459 19:38:49 -- common/autotest_common.sh@821 -- # return 0 00:18:02.459 19:38:49 -- target/tls.sh@16 -- # killprocess 89529 00:18:02.459 19:38:49 -- common/autotest_common.sh@936 -- # '[' -z 89529 ']' 00:18:02.459 19:38:49 -- common/autotest_common.sh@940 -- # kill -0 89529 00:18:02.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89529) - No such process 00:18:02.459 Process with pid 89529 is not found 00:18:02.459 19:38:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89529 is not found' 00:18:02.459 19:38:49 -- target/tls.sh@17 -- # nvmftestfini 00:18:02.459 19:38:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:02.459 19:38:49 -- nvmf/common.sh@116 -- # sync 00:18:02.459 19:38:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:02.459 19:38:49 -- nvmf/common.sh@119 -- # set +e 00:18:02.459 19:38:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:02.459 19:38:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:02.459 rmmod nvme_tcp 00:18:02.459 rmmod nvme_fabrics 00:18:02.459 rmmod nvme_keyring 00:18:02.459 19:38:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:02.459 19:38:49 -- nvmf/common.sh@123 -- # set -e 00:18:02.459 19:38:49 -- nvmf/common.sh@124 -- # return 0 00:18:02.459 19:38:49 -- nvmf/common.sh@477 -- # '[' -n 89489 ']' 00:18:02.459 19:38:49 -- nvmf/common.sh@478 -- # killprocess 89489 00:18:02.459 19:38:49 -- common/autotest_common.sh@936 -- # '[' -z 89489 ']' 00:18:02.459 19:38:49 -- common/autotest_common.sh@940 -- # kill -0 89489 00:18:02.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89489) - No such process 00:18:02.459 Process with pid 89489 is not found 00:18:02.459 19:38:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89489 is not found' 00:18:02.459 19:38:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:02.459 19:38:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:02.459 19:38:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:02.459 19:38:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.459 19:38:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:02.459 19:38:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.459 19:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.459 19:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.459 19:38:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:02.459 19:38:49 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.459 ************************************ 00:18:02.459 END TEST nvmf_tls 00:18:02.459 ************************************ 00:18:02.459 00:18:02.459 real 1m12.624s 00:18:02.459 user 1m51.442s 00:18:02.459 sys 0m25.561s 00:18:02.459 19:38:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.459 19:38:49 -- common/autotest_common.sh@10 -- # 
set +x 00:18:02.459 19:38:49 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:02.459 19:38:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:02.459 19:38:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.459 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:18:02.459 ************************************ 00:18:02.459 START TEST nvmf_fips 00:18:02.459 ************************************ 00:18:02.459 19:38:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:02.719 * Looking for test storage... 00:18:02.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:02.719 19:38:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:02.719 19:38:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:02.719 19:38:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:02.719 19:38:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:02.719 19:38:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:02.719 19:38:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.719 19:38:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.719 19:38:49 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.719 19:38:49 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.719 19:38:49 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.719 19:38:49 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.719 19:38:49 -- scripts/common.sh@337 -- # local 'op=<' 00:18:02.719 19:38:49 -- scripts/common.sh@339 -- # ver1_l=2 00:18:02.719 19:38:49 -- scripts/common.sh@340 -- # ver2_l=1 00:18:02.719 19:38:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.719 19:38:49 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.719 19:38:49 -- scripts/common.sh@344 -- # : 1 00:18:02.719 19:38:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.719 19:38:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.719 19:38:49 -- scripts/common.sh@364 -- # decimal 1 00:18:02.719 19:38:49 -- scripts/common.sh@352 -- # local d=1 00:18:02.719 19:38:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.719 19:38:49 -- scripts/common.sh@354 -- # echo 1 00:18:02.719 19:38:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.719 19:38:49 -- scripts/common.sh@365 -- # decimal 2 00:18:02.719 19:38:49 -- scripts/common.sh@352 -- # local d=2 00:18:02.719 19:38:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.719 19:38:49 -- scripts/common.sh@354 -- # echo 2 00:18:02.719 19:38:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:02.719 19:38:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.719 19:38:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.719 19:38:49 -- scripts/common.sh@367 -- # return 0 00:18:02.719 19:38:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.719 19:38:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:02.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.719 --rc genhtml_branch_coverage=1 00:18:02.719 --rc genhtml_function_coverage=1 00:18:02.719 --rc genhtml_legend=1 00:18:02.719 --rc geninfo_all_blocks=1 00:18:02.719 --rc geninfo_unexecuted_blocks=1 00:18:02.719 00:18:02.719 ' 00:18:02.719 19:38:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:02.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.719 --rc genhtml_branch_coverage=1 00:18:02.719 --rc genhtml_function_coverage=1 00:18:02.719 --rc genhtml_legend=1 00:18:02.719 --rc geninfo_all_blocks=1 00:18:02.719 --rc geninfo_unexecuted_blocks=1 00:18:02.719 00:18:02.719 ' 00:18:02.719 19:38:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:02.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.719 --rc genhtml_branch_coverage=1 00:18:02.719 --rc genhtml_function_coverage=1 00:18:02.719 --rc genhtml_legend=1 00:18:02.719 --rc geninfo_all_blocks=1 00:18:02.719 --rc geninfo_unexecuted_blocks=1 00:18:02.719 00:18:02.719 ' 00:18:02.719 19:38:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:02.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.719 --rc genhtml_branch_coverage=1 00:18:02.719 --rc genhtml_function_coverage=1 00:18:02.719 --rc genhtml_legend=1 00:18:02.719 --rc geninfo_all_blocks=1 00:18:02.719 --rc geninfo_unexecuted_blocks=1 00:18:02.719 00:18:02.719 ' 00:18:02.719 19:38:49 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.719 19:38:49 -- nvmf/common.sh@7 -- # uname -s 00:18:02.719 19:38:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.719 19:38:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.719 19:38:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.719 19:38:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.719 19:38:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.719 19:38:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.719 19:38:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.719 19:38:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.719 19:38:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.719 19:38:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.719 19:38:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:18:02.719 
19:38:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:18:02.719 19:38:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.719 19:38:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.719 19:38:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.719 19:38:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.719 19:38:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.719 19:38:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.719 19:38:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.719 19:38:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.719 19:38:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.719 19:38:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.719 19:38:49 -- paths/export.sh@5 -- # export PATH 00:18:02.719 19:38:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.719 19:38:49 -- nvmf/common.sh@46 -- # : 0 00:18:02.719 19:38:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.719 19:38:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.719 19:38:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.719 19:38:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.719 19:38:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.719 19:38:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:02.719 19:38:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.719 19:38:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.719 19:38:49 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.719 19:38:49 -- fips/fips.sh@89 -- # check_openssl_version 00:18:02.719 19:38:49 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:02.719 19:38:49 -- fips/fips.sh@85 -- # openssl version 00:18:02.719 19:38:49 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:02.719 19:38:49 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:02.719 19:38:49 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:02.719 19:38:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.719 19:38:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.719 19:38:49 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.720 19:38:49 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.720 19:38:49 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.720 19:38:49 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.720 19:38:49 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:02.720 19:38:49 -- scripts/common.sh@339 -- # ver1_l=3 00:18:02.720 19:38:49 -- scripts/common.sh@340 -- # ver2_l=3 00:18:02.720 19:38:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.720 19:38:49 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.720 19:38:49 -- scripts/common.sh@347 -- # : 1 00:18:02.720 19:38:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.720 19:38:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.720 19:38:49 -- scripts/common.sh@364 -- # decimal 3 00:18:02.720 19:38:49 -- scripts/common.sh@352 -- # local d=3 00:18:02.720 19:38:49 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.720 19:38:49 -- scripts/common.sh@354 -- # echo 3 00:18:02.720 19:38:49 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:02.720 19:38:49 -- scripts/common.sh@365 -- # decimal 3 00:18:02.720 19:38:49 -- scripts/common.sh@352 -- # local d=3 00:18:02.720 19:38:49 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.720 19:38:49 -- scripts/common.sh@354 -- # echo 3 00:18:02.720 19:38:49 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:02.720 19:38:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.720 19:38:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.720 19:38:49 -- scripts/common.sh@363 -- # (( v++ )) 00:18:02.720 19:38:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.720 19:38:49 -- scripts/common.sh@364 -- # decimal 1 00:18:02.720 19:38:49 -- scripts/common.sh@352 -- # local d=1 00:18:02.720 19:38:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.720 19:38:49 -- scripts/common.sh@354 -- # echo 1 00:18:02.720 19:38:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.720 19:38:49 -- scripts/common.sh@365 -- # decimal 0 00:18:02.720 19:38:49 -- scripts/common.sh@352 -- # local d=0 00:18:02.720 19:38:49 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.720 19:38:49 -- scripts/common.sh@354 -- # echo 0 00:18:02.720 19:38:49 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:02.720 19:38:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.720 19:38:49 -- scripts/common.sh@366 -- # return 0 00:18:02.720 19:38:49 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:02.720 19:38:49 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:02.720 19:38:49 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:02.720 19:38:49 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:02.720 19:38:49 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:02.720 19:38:49 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:02.720 19:38:49 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:02.720 19:38:49 -- fips/fips.sh@113 -- # build_openssl_config 00:18:02.720 19:38:49 -- fips/fips.sh@37 -- # cat 00:18:02.720 19:38:49 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:02.720 19:38:49 -- fips/fips.sh@58 -- # cat - 00:18:02.720 19:38:49 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:02.720 19:38:49 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:02.720 19:38:49 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:02.720 19:38:49 -- fips/fips.sh@116 -- # openssl list -providers 00:18:02.720 19:38:49 -- fips/fips.sh@116 -- # grep name 00:18:02.980 19:38:49 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:02.980 19:38:49 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:02.980 19:38:49 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:02.980 19:38:49 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:02.980 19:38:49 -- fips/fips.sh@127 -- # : 00:18:02.980 19:38:49 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.980 19:38:49 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:02.981 19:38:49 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:02.981 19:38:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.981 19:38:49 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:02.981 19:38:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.981 19:38:49 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:02.981 19:38:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.981 19:38:49 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:02.981 19:38:49 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:02.981 19:38:49 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:02.981 Error setting digest 00:18:02.981 408267B2957F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:02.981 408267B2957F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:02.981 19:38:49 -- common/autotest_common.sh@653 -- # es=1 00:18:02.981 19:38:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.981 19:38:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.981 19:38:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.981 19:38:49 -- fips/fips.sh@130 -- # nvmftestinit 00:18:02.981 19:38:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.981 19:38:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.981 19:38:49 -- nvmf/common.sh@436 -- # prepare_net_devs 
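The provider check above reduces to two observable properties of a FIPS-enabled OpenSSL 3.x build: a FIPS provider appears in the provider list, and a non-approved digest such as MD5 is refused once that provider is the active default (the test arranges this by pointing OPENSSL_CONF at a generated spdk_fips.conf). A minimal stand-alone sketch of the same check, assuming a similarly configured host (provider names and module paths vary by distribution):

    # the loaded providers should include a FIPS provider
    openssl list -providers | grep -i 'name:'

    # MD5 is not FIPS-approved, so this is expected to fail, mirroring the
    # 'Error setting digest' output captured above
    echo test | openssl md5 \
        && echo 'MD5 unexpectedly allowed' \
        || echo 'MD5 rejected as expected'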
00:18:02.981 19:38:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.981 19:38:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.981 19:38:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.981 19:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.981 19:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.981 19:38:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:02.981 19:38:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:02.981 19:38:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:02.981 19:38:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:02.981 19:38:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:02.981 19:38:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:02.981 19:38:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.981 19:38:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.981 19:38:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.981 19:38:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:02.981 19:38:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.981 19:38:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.981 19:38:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.981 19:38:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.981 19:38:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.981 19:38:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.981 19:38:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.981 19:38:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.981 19:38:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:02.981 19:38:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:02.981 Cannot find device "nvmf_tgt_br" 00:18:02.981 19:38:49 -- nvmf/common.sh@154 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.981 Cannot find device "nvmf_tgt_br2" 00:18:02.981 19:38:49 -- nvmf/common.sh@155 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:02.981 19:38:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:02.981 Cannot find device "nvmf_tgt_br" 00:18:02.981 19:38:49 -- nvmf/common.sh@157 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:02.981 Cannot find device "nvmf_tgt_br2" 00:18:02.981 19:38:49 -- nvmf/common.sh@158 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:02.981 19:38:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:02.981 19:38:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.981 19:38:49 -- nvmf/common.sh@161 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.981 19:38:49 -- nvmf/common.sh@162 -- # true 00:18:02.981 19:38:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.981 19:38:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.981 19:38:49 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.981 19:38:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.981 19:38:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.241 19:38:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.241 19:38:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.241 19:38:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:03.241 19:38:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:03.241 19:38:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:03.241 19:38:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:03.241 19:38:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:03.241 19:38:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:03.241 19:38:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.241 19:38:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.241 19:38:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.241 19:38:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:03.241 19:38:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:03.241 19:38:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.241 19:38:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.241 19:38:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.241 19:38:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.241 19:38:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.241 19:38:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:03.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:03.241 00:18:03.241 --- 10.0.0.2 ping statistics --- 00:18:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.241 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:03.241 19:38:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:03.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:03.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:03.241 00:18:03.241 --- 10.0.0.3 ping statistics --- 00:18:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.241 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:03.241 19:38:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:03.241 00:18:03.241 --- 10.0.0.1 ping statistics --- 00:18:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.241 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:03.241 19:38:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.241 19:38:50 -- nvmf/common.sh@421 -- # return 0 00:18:03.241 19:38:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.241 19:38:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.241 19:38:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:03.241 19:38:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:03.241 19:38:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.241 19:38:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:03.241 19:38:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:03.241 19:38:50 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:03.241 19:38:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.241 19:38:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.241 19:38:50 -- common/autotest_common.sh@10 -- # set +x 00:18:03.241 19:38:50 -- nvmf/common.sh@469 -- # nvmfpid=89894 00:18:03.241 19:38:50 -- nvmf/common.sh@470 -- # waitforlisten 89894 00:18:03.241 19:38:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.241 19:38:50 -- common/autotest_common.sh@829 -- # '[' -z 89894 ']' 00:18:03.241 19:38:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.241 19:38:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.241 19:38:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.241 19:38:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.241 19:38:50 -- common/autotest_common.sh@10 -- # set +x 00:18:03.500 [2024-12-15 19:38:50.169872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:03.500 [2024-12-15 19:38:50.169953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.500 [2024-12-15 19:38:50.311082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.759 [2024-12-15 19:38:50.403563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.759 [2024-12-15 19:38:50.403747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.759 [2024-12-15 19:38:50.403763] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.759 [2024-12-15 19:38:50.403776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
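Condensed from the nvmf_veth_init trace above, the test network amounts to roughly the following (a sketch only; the namespace, interface and address names are the test defaults printed in the trace, and the error output from the initial cleanup pass is omitted):

    # the target lives in its own network namespace, reached over veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # a second target interface (nvmf_tgt_if2, 10.0.0.3) is set up the same way
    # the host-side peers are bridged together, and NVMe/TCP traffic is allowed in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # connectivity checks before the target is started

The single-packet pings to 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 in the trace are exactly those connectivity checks.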
00:18:03.759 [2024-12-15 19:38:50.403847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.329 19:38:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.329 19:38:51 -- common/autotest_common.sh@862 -- # return 0 00:18:04.329 19:38:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.329 19:38:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.329 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 19:38:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.329 19:38:51 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:04.329 19:38:51 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:04.329 19:38:51 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:04.329 19:38:51 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:04.329 19:38:51 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:04.329 19:38:51 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:04.329 19:38:51 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:04.329 19:38:51 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.589 [2024-12-15 19:38:51.475506] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.848 [2024-12-15 19:38:51.491464] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.848 [2024-12-15 19:38:51.491665] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.848 malloc0 00:18:04.848 19:38:51 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.848 19:38:51 -- fips/fips.sh@147 -- # bdevperf_pid=89957 00:18:04.848 19:38:51 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.848 19:38:51 -- fips/fips.sh@148 -- # waitforlisten 89957 /var/tmp/bdevperf.sock 00:18:04.848 19:38:51 -- common/autotest_common.sh@829 -- # '[' -z 89957 ']' 00:18:04.848 19:38:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.848 19:38:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.848 19:38:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.848 19:38:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.848 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:04.848 [2024-12-15 19:38:51.632941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:04.848 [2024-12-15 19:38:51.633027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89957 ] 00:18:05.107 [2024-12-15 19:38:51.766495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.107 [2024-12-15 19:38:51.853288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.044 19:38:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.045 19:38:52 -- common/autotest_common.sh@862 -- # return 0 00:18:06.045 19:38:52 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:06.045 [2024-12-15 19:38:52.823946] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.045 TLSTESTn1 00:18:06.045 19:38:52 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.303 Running I/O for 10 seconds... 00:18:16.285 00:18:16.285 Latency(us) 00:18:16.285 [2024-12-15T19:39:03.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.285 [2024-12-15T19:39:03.181Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.285 Verification LBA range: start 0x0 length 0x2000 00:18:16.285 TLSTESTn1 : 10.02 6393.82 24.98 0.00 0.00 19984.15 6702.55 20256.58 00:18:16.285 [2024-12-15T19:39:03.181Z] =================================================================================================================== 00:18:16.285 [2024-12-15T19:39:03.181Z] Total : 6393.82 24.98 0.00 0.00 19984.15 6702.55 20256.58 00:18:16.285 0 00:18:16.285 19:39:03 -- fips/fips.sh@1 -- # cleanup 00:18:16.285 19:39:03 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:16.285 19:39:03 -- common/autotest_common.sh@806 -- # type=--id 00:18:16.285 19:39:03 -- common/autotest_common.sh@807 -- # id=0 00:18:16.285 19:39:03 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:16.285 19:39:03 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:16.285 19:39:03 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:16.285 19:39:03 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:16.285 19:39:03 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:16.285 19:39:03 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:16.285 nvmf_trace.0 00:18:16.285 19:39:03 -- common/autotest_common.sh@821 -- # return 0 00:18:16.285 19:39:03 -- fips/fips.sh@16 -- # killprocess 89957 00:18:16.285 19:39:03 -- common/autotest_common.sh@936 -- # '[' -z 89957 ']' 00:18:16.285 19:39:03 -- common/autotest_common.sh@940 -- # kill -0 89957 00:18:16.285 19:39:03 -- common/autotest_common.sh@941 -- # uname 00:18:16.285 19:39:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.285 19:39:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89957 00:18:16.285 19:39:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:16.285 killing process with pid 89957 00:18:16.285 19:39:03 -- 
common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:16.285 19:39:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89957' 00:18:16.285 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.285 00:18:16.285 Latency(us) 00:18:16.285 [2024-12-15T19:39:03.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.285 [2024-12-15T19:39:03.181Z] =================================================================================================================== 00:18:16.285 [2024-12-15T19:39:03.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.285 19:39:03 -- common/autotest_common.sh@955 -- # kill 89957 00:18:16.285 19:39:03 -- common/autotest_common.sh@960 -- # wait 89957 00:18:16.545 19:39:03 -- fips/fips.sh@17 -- # nvmftestfini 00:18:16.545 19:39:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:16.545 19:39:03 -- nvmf/common.sh@116 -- # sync 00:18:16.804 19:39:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:16.804 19:39:03 -- nvmf/common.sh@119 -- # set +e 00:18:16.804 19:39:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:16.804 19:39:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:16.804 rmmod nvme_tcp 00:18:16.804 rmmod nvme_fabrics 00:18:16.804 rmmod nvme_keyring 00:18:16.804 19:39:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:16.804 19:39:03 -- nvmf/common.sh@123 -- # set -e 00:18:16.804 19:39:03 -- nvmf/common.sh@124 -- # return 0 00:18:16.804 19:39:03 -- nvmf/common.sh@477 -- # '[' -n 89894 ']' 00:18:16.804 19:39:03 -- nvmf/common.sh@478 -- # killprocess 89894 00:18:16.804 19:39:03 -- common/autotest_common.sh@936 -- # '[' -z 89894 ']' 00:18:16.804 19:39:03 -- common/autotest_common.sh@940 -- # kill -0 89894 00:18:16.804 19:39:03 -- common/autotest_common.sh@941 -- # uname 00:18:16.804 19:39:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.804 19:39:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89894 00:18:16.804 19:39:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:16.804 19:39:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:16.804 killing process with pid 89894 00:18:16.804 19:39:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89894' 00:18:16.804 19:39:03 -- common/autotest_common.sh@955 -- # kill 89894 00:18:16.804 19:39:03 -- common/autotest_common.sh@960 -- # wait 89894 00:18:17.064 19:39:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:17.064 19:39:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:17.064 19:39:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:17.064 19:39:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.064 19:39:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:17.064 19:39:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.064 19:39:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.064 19:39:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.064 19:39:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:17.064 19:39:03 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:17.064 00:18:17.064 real 0m14.594s 00:18:17.064 user 0m19.398s 00:18:17.064 sys 0m6.093s 00:18:17.064 19:39:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:17.064 ************************************ 00:18:17.064 END TEST nvmf_fips 00:18:17.064 
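In outline, the FIPS TLS test that just finished runs roughly the following (a condensed sketch; paths are shortened relative to the SPDK repo, and the key, NQNs and addresses are exactly the values shown in the trace):

    # target: nvmf_tgt runs inside the namespace created by nvmf_veth_init
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # the TLS PSK is written to a 0600 key file (key value as printed in the trace)
    echo -n "$key" > test/nvmf/fips/key.txt    # key=NVMeTLSkey-1:01:VRLb...rZ:
    chmod 0600 test/nvmf/fips/key.txt
    # fips.sh then configures the target over scripts/rpc.py: TCP transport,
    # a malloc0 namespace, and a TLS-enabled listener on 10.0.0.2:4420

    # initiator: bdevperf attaches over NVMe/TCP with the same PSK, then runs verify I/O
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 numbers above (about 6.4k IOPS over 10 seconds) are the result of that perform_tests run.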
************************************ 00:18:17.065 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:18:17.065 19:39:03 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:17.065 19:39:03 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:17.065 19:39:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:17.065 19:39:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.065 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:18:17.325 ************************************ 00:18:17.325 START TEST nvmf_fuzz 00:18:17.325 ************************************ 00:18:17.325 19:39:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:17.325 * Looking for test storage... 00:18:17.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:17.325 19:39:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:17.325 19:39:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:17.325 19:39:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:17.325 19:39:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:17.325 19:39:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:17.325 19:39:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:17.325 19:39:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:17.325 19:39:04 -- scripts/common.sh@335 -- # IFS=.-: 00:18:17.325 19:39:04 -- scripts/common.sh@335 -- # read -ra ver1 00:18:17.325 19:39:04 -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.325 19:39:04 -- scripts/common.sh@336 -- # read -ra ver2 00:18:17.325 19:39:04 -- scripts/common.sh@337 -- # local 'op=<' 00:18:17.325 19:39:04 -- scripts/common.sh@339 -- # ver1_l=2 00:18:17.325 19:39:04 -- scripts/common.sh@340 -- # ver2_l=1 00:18:17.325 19:39:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:17.325 19:39:04 -- scripts/common.sh@343 -- # case "$op" in 00:18:17.325 19:39:04 -- scripts/common.sh@344 -- # : 1 00:18:17.325 19:39:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:17.325 19:39:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.325 19:39:04 -- scripts/common.sh@364 -- # decimal 1 00:18:17.325 19:39:04 -- scripts/common.sh@352 -- # local d=1 00:18:17.325 19:39:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.325 19:39:04 -- scripts/common.sh@354 -- # echo 1 00:18:17.325 19:39:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:17.325 19:39:04 -- scripts/common.sh@365 -- # decimal 2 00:18:17.325 19:39:04 -- scripts/common.sh@352 -- # local d=2 00:18:17.325 19:39:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.325 19:39:04 -- scripts/common.sh@354 -- # echo 2 00:18:17.325 19:39:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:17.325 19:39:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:17.325 19:39:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:17.325 19:39:04 -- scripts/common.sh@367 -- # return 0 00:18:17.325 19:39:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.325 19:39:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:17.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.325 --rc genhtml_branch_coverage=1 00:18:17.325 --rc genhtml_function_coverage=1 00:18:17.325 --rc genhtml_legend=1 00:18:17.325 --rc geninfo_all_blocks=1 00:18:17.325 --rc geninfo_unexecuted_blocks=1 00:18:17.325 00:18:17.325 ' 00:18:17.325 19:39:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:17.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.325 --rc genhtml_branch_coverage=1 00:18:17.325 --rc genhtml_function_coverage=1 00:18:17.325 --rc genhtml_legend=1 00:18:17.325 --rc geninfo_all_blocks=1 00:18:17.325 --rc geninfo_unexecuted_blocks=1 00:18:17.325 00:18:17.325 ' 00:18:17.325 19:39:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:17.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.325 --rc genhtml_branch_coverage=1 00:18:17.325 --rc genhtml_function_coverage=1 00:18:17.325 --rc genhtml_legend=1 00:18:17.325 --rc geninfo_all_blocks=1 00:18:17.325 --rc geninfo_unexecuted_blocks=1 00:18:17.325 00:18:17.325 ' 00:18:17.325 19:39:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:17.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.325 --rc genhtml_branch_coverage=1 00:18:17.325 --rc genhtml_function_coverage=1 00:18:17.325 --rc genhtml_legend=1 00:18:17.325 --rc geninfo_all_blocks=1 00:18:17.325 --rc geninfo_unexecuted_blocks=1 00:18:17.325 00:18:17.325 ' 00:18:17.325 19:39:04 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.325 19:39:04 -- nvmf/common.sh@7 -- # uname -s 00:18:17.325 19:39:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.325 19:39:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.325 19:39:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.325 19:39:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.325 19:39:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.325 19:39:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.325 19:39:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.325 19:39:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.325 19:39:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.325 19:39:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.325 19:39:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:18:17.325 19:39:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:18:17.325 19:39:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.325 19:39:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.325 19:39:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.325 19:39:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.325 19:39:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.325 19:39:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.325 19:39:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.325 19:39:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.326 19:39:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.326 19:39:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.326 19:39:04 -- paths/export.sh@5 -- # export PATH 00:18:17.326 19:39:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.326 19:39:04 -- nvmf/common.sh@46 -- # : 0 00:18:17.326 19:39:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:17.326 19:39:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:17.326 19:39:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:17.326 19:39:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.326 19:39:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.326 19:39:04 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:17.326 19:39:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:17.326 19:39:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:17.326 19:39:04 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:17.326 19:39:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:17.326 19:39:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.326 19:39:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:17.326 19:39:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:17.326 19:39:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:17.326 19:39:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.326 19:39:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.326 19:39:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.326 19:39:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:17.326 19:39:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:17.326 19:39:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:17.326 19:39:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:17.326 19:39:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:17.326 19:39:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:17.326 19:39:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.326 19:39:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.326 19:39:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:17.326 19:39:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:17.326 19:39:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.326 19:39:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.326 19:39:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.326 19:39:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.326 19:39:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.326 19:39:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.326 19:39:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.326 19:39:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.326 19:39:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:17.326 19:39:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:17.326 Cannot find device "nvmf_tgt_br" 00:18:17.326 19:39:04 -- nvmf/common.sh@154 -- # true 00:18:17.326 19:39:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.326 Cannot find device "nvmf_tgt_br2" 00:18:17.326 19:39:04 -- nvmf/common.sh@155 -- # true 00:18:17.326 19:39:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:17.585 19:39:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:17.585 Cannot find device "nvmf_tgt_br" 00:18:17.585 19:39:04 -- nvmf/common.sh@157 -- # true 00:18:17.585 19:39:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:17.585 Cannot find device "nvmf_tgt_br2" 00:18:17.585 19:39:04 -- nvmf/common.sh@158 -- # true 00:18:17.585 19:39:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:17.585 19:39:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:17.585 19:39:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.585 19:39:04 -- nvmf/common.sh@161 -- # true 00:18:17.585 19:39:04 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.585 19:39:04 -- nvmf/common.sh@162 -- # true 00:18:17.585 19:39:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.585 19:39:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.585 19:39:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.585 19:39:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.585 19:39:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.585 19:39:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.585 19:39:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.585 19:39:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.585 19:39:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.585 19:39:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:17.585 19:39:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:17.585 19:39:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:17.585 19:39:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:17.585 19:39:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.585 19:39:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.586 19:39:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.586 19:39:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:17.586 19:39:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:17.586 19:39:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.586 19:39:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.845 19:39:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.845 19:39:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.845 19:39:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.845 19:39:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:17.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:17.845 00:18:17.845 --- 10.0.0.2 ping statistics --- 00:18:17.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.845 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:17.845 19:39:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:17.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:18:17.845 00:18:17.845 --- 10.0.0.3 ping statistics --- 00:18:17.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.845 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:17.845 19:39:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:17.845 00:18:17.845 --- 10.0.0.1 ping statistics --- 00:18:17.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.845 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:17.845 19:39:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.845 19:39:04 -- nvmf/common.sh@421 -- # return 0 00:18:17.845 19:39:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:17.845 19:39:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.845 19:39:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:17.845 19:39:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:17.845 19:39:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.845 19:39:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:17.845 19:39:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:17.845 19:39:04 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90304 00:18:17.845 19:39:04 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:17.845 19:39:04 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:17.845 19:39:04 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90304 00:18:17.845 19:39:04 -- common/autotest_common.sh@829 -- # '[' -z 90304 ']' 00:18:17.845 19:39:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.845 19:39:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.845 19:39:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
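The fabrics fuzz pass that the freshly started target (pid 90304) is about to run boils down to pointing SPDK's nvme_fuzz example at a single TCP listener, condensed here from the rpc_cmd and nvme_fuzz invocations traced below:

    # one subsystem with a 64 MB malloc namespace (512-byte blocks) and a TCP listener to fuzz
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # 30 s of randomized commands with a fixed seed, then a second pass driven by example.json
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j test/app/fuzz/nvme_fuzz/example.json -a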
00:18:17.845 19:39:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.845 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:18:18.786 19:39:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.786 19:39:05 -- common/autotest_common.sh@862 -- # return 0 00:18:18.786 19:39:05 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.786 19:39:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.786 19:39:05 -- common/autotest_common.sh@10 -- # set +x 00:18:18.786 19:39:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.786 19:39:05 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:18.786 19:39:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.786 19:39:05 -- common/autotest_common.sh@10 -- # set +x 00:18:19.046 Malloc0 00:18:19.046 19:39:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.046 19:39:05 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.046 19:39:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.046 19:39:05 -- common/autotest_common.sh@10 -- # set +x 00:18:19.046 19:39:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.046 19:39:05 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.046 19:39:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.046 19:39:05 -- common/autotest_common.sh@10 -- # set +x 00:18:19.046 19:39:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.046 19:39:05 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.046 19:39:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.046 19:39:05 -- common/autotest_common.sh@10 -- # set +x 00:18:19.046 19:39:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.046 19:39:05 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:19.046 19:39:05 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:19.304 Shutting down the fuzz application 00:18:19.305 19:39:06 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:19.564 Shutting down the fuzz application 00:18:19.564 19:39:06 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.564 19:39:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.564 19:39:06 -- common/autotest_common.sh@10 -- # set +x 00:18:19.564 19:39:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.564 19:39:06 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:19.564 19:39:06 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:19.564 19:39:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:19.564 19:39:06 -- nvmf/common.sh@116 -- # sync 00:18:19.564 19:39:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:19.564 19:39:06 -- nvmf/common.sh@119 -- # set +e 00:18:19.564 19:39:06 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:19.564 19:39:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:19.564 rmmod nvme_tcp 00:18:19.564 rmmod nvme_fabrics 00:18:19.823 rmmod nvme_keyring 00:18:19.823 19:39:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:19.823 19:39:06 -- nvmf/common.sh@123 -- # set -e 00:18:19.823 19:39:06 -- nvmf/common.sh@124 -- # return 0 00:18:19.823 19:39:06 -- nvmf/common.sh@477 -- # '[' -n 90304 ']' 00:18:19.823 19:39:06 -- nvmf/common.sh@478 -- # killprocess 90304 00:18:19.823 19:39:06 -- common/autotest_common.sh@936 -- # '[' -z 90304 ']' 00:18:19.823 19:39:06 -- common/autotest_common.sh@940 -- # kill -0 90304 00:18:19.823 19:39:06 -- common/autotest_common.sh@941 -- # uname 00:18:19.823 19:39:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.823 19:39:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90304 00:18:19.823 19:39:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.823 19:39:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.823 killing process with pid 90304 00:18:19.823 19:39:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90304' 00:18:19.823 19:39:06 -- common/autotest_common.sh@955 -- # kill 90304 00:18:19.823 19:39:06 -- common/autotest_common.sh@960 -- # wait 90304 00:18:20.082 19:39:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.082 19:39:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.082 19:39:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.082 19:39:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.082 19:39:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.082 19:39:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.082 19:39:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.082 19:39:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.082 19:39:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:20.082 19:39:06 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:20.082 00:18:20.082 real 0m2.914s 00:18:20.082 user 0m2.927s 00:18:20.082 sys 0m0.743s 00:18:20.082 19:39:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:20.082 19:39:06 -- common/autotest_common.sh@10 -- # set +x 00:18:20.082 ************************************ 00:18:20.082 END TEST nvmf_fuzz 00:18:20.082 ************************************ 00:18:20.082 19:39:06 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:20.082 19:39:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.082 19:39:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.082 19:39:06 -- common/autotest_common.sh@10 -- # set +x 00:18:20.082 ************************************ 00:18:20.082 START TEST nvmf_multiconnection 00:18:20.082 ************************************ 00:18:20.082 19:39:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:20.344 * Looking for test storage... 
00:18:20.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:20.344 19:39:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:20.344 19:39:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:20.344 19:39:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:20.344 19:39:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:20.344 19:39:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:20.344 19:39:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:20.344 19:39:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:20.344 19:39:07 -- scripts/common.sh@335 -- # IFS=.-: 00:18:20.344 19:39:07 -- scripts/common.sh@335 -- # read -ra ver1 00:18:20.344 19:39:07 -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.344 19:39:07 -- scripts/common.sh@336 -- # read -ra ver2 00:18:20.344 19:39:07 -- scripts/common.sh@337 -- # local 'op=<' 00:18:20.344 19:39:07 -- scripts/common.sh@339 -- # ver1_l=2 00:18:20.344 19:39:07 -- scripts/common.sh@340 -- # ver2_l=1 00:18:20.344 19:39:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:20.344 19:39:07 -- scripts/common.sh@343 -- # case "$op" in 00:18:20.344 19:39:07 -- scripts/common.sh@344 -- # : 1 00:18:20.344 19:39:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:20.344 19:39:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.344 19:39:07 -- scripts/common.sh@364 -- # decimal 1 00:18:20.344 19:39:07 -- scripts/common.sh@352 -- # local d=1 00:18:20.344 19:39:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.344 19:39:07 -- scripts/common.sh@354 -- # echo 1 00:18:20.344 19:39:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:20.344 19:39:07 -- scripts/common.sh@365 -- # decimal 2 00:18:20.344 19:39:07 -- scripts/common.sh@352 -- # local d=2 00:18:20.344 19:39:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.345 19:39:07 -- scripts/common.sh@354 -- # echo 2 00:18:20.345 19:39:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:20.345 19:39:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:20.345 19:39:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:20.345 19:39:07 -- scripts/common.sh@367 -- # return 0 00:18:20.345 19:39:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.345 19:39:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.345 --rc genhtml_branch_coverage=1 00:18:20.345 --rc genhtml_function_coverage=1 00:18:20.345 --rc genhtml_legend=1 00:18:20.345 --rc geninfo_all_blocks=1 00:18:20.345 --rc geninfo_unexecuted_blocks=1 00:18:20.345 00:18:20.345 ' 00:18:20.345 19:39:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.345 --rc genhtml_branch_coverage=1 00:18:20.345 --rc genhtml_function_coverage=1 00:18:20.345 --rc genhtml_legend=1 00:18:20.345 --rc geninfo_all_blocks=1 00:18:20.345 --rc geninfo_unexecuted_blocks=1 00:18:20.345 00:18:20.345 ' 00:18:20.345 19:39:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.345 --rc genhtml_branch_coverage=1 00:18:20.345 --rc genhtml_function_coverage=1 00:18:20.345 --rc genhtml_legend=1 00:18:20.345 --rc geninfo_all_blocks=1 00:18:20.345 --rc geninfo_unexecuted_blocks=1 00:18:20.345 00:18:20.345 ' 00:18:20.345 
19:39:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.345 --rc genhtml_branch_coverage=1 00:18:20.345 --rc genhtml_function_coverage=1 00:18:20.345 --rc genhtml_legend=1 00:18:20.345 --rc geninfo_all_blocks=1 00:18:20.345 --rc geninfo_unexecuted_blocks=1 00:18:20.345 00:18:20.345 ' 00:18:20.345 19:39:07 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.345 19:39:07 -- nvmf/common.sh@7 -- # uname -s 00:18:20.345 19:39:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.345 19:39:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.345 19:39:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.345 19:39:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.345 19:39:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.345 19:39:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.345 19:39:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.345 19:39:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.345 19:39:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.345 19:39:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:18:20.345 19:39:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:18:20.345 19:39:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.345 19:39:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.345 19:39:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.345 19:39:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.345 19:39:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.345 19:39:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.345 19:39:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.345 19:39:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.345 19:39:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.345 19:39:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.345 19:39:07 -- paths/export.sh@5 -- # export PATH 00:18:20.345 19:39:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.345 19:39:07 -- nvmf/common.sh@46 -- # : 0 00:18:20.345 19:39:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:20.345 19:39:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:20.345 19:39:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:20.345 19:39:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.345 19:39:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.345 19:39:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:20.345 19:39:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:20.345 19:39:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:20.345 19:39:07 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.345 19:39:07 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.345 19:39:07 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:20.345 19:39:07 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:20.345 19:39:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:20.345 19:39:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.345 19:39:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:20.345 19:39:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:20.345 19:39:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:20.345 19:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.345 19:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.345 19:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.345 19:39:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:20.345 19:39:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:20.345 19:39:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.345 19:39:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.345 19:39:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:20.345 19:39:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:20.345 19:39:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:20.345 19:39:07 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:20.345 19:39:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:20.345 19:39:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.345 19:39:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:20.345 19:39:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:20.345 19:39:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:20.345 19:39:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:20.345 19:39:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:20.345 19:39:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:20.345 Cannot find device "nvmf_tgt_br" 00:18:20.345 19:39:07 -- nvmf/common.sh@154 -- # true 00:18:20.345 19:39:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.345 Cannot find device "nvmf_tgt_br2" 00:18:20.345 19:39:07 -- nvmf/common.sh@155 -- # true 00:18:20.345 19:39:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:20.345 19:39:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:20.345 Cannot find device "nvmf_tgt_br" 00:18:20.345 19:39:07 -- nvmf/common.sh@157 -- # true 00:18:20.345 19:39:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:20.345 Cannot find device "nvmf_tgt_br2" 00:18:20.345 19:39:07 -- nvmf/common.sh@158 -- # true 00:18:20.345 19:39:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:20.618 19:39:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:20.618 19:39:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.618 19:39:07 -- nvmf/common.sh@161 -- # true 00:18:20.618 19:39:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.618 19:39:07 -- nvmf/common.sh@162 -- # true 00:18:20.618 19:39:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.618 19:39:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.618 19:39:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.618 19:39:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.618 19:39:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.618 19:39:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.618 19:39:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.618 19:39:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.618 19:39:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:20.618 19:39:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:20.618 19:39:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:20.618 19:39:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:20.618 19:39:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:20.618 19:39:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.618 19:39:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:20.618 19:39:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.618 19:39:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:20.618 19:39:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:20.618 19:39:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.618 19:39:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.618 19:39:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.618 19:39:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.618 19:39:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.618 19:39:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:20.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:20.618 00:18:20.618 --- 10.0.0.2 ping statistics --- 00:18:20.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.618 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:20.618 19:39:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:20.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:20.618 00:18:20.618 --- 10.0.0.3 ping statistics --- 00:18:20.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.618 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:20.618 19:39:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:20.618 00:18:20.618 --- 10.0.0.1 ping statistics --- 00:18:20.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.618 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:20.618 19:39:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.618 19:39:07 -- nvmf/common.sh@421 -- # return 0 00:18:20.618 19:39:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:20.618 19:39:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.618 19:39:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:20.618 19:39:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:20.618 19:39:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.619 19:39:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:20.619 19:39:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:20.619 19:39:07 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:20.619 19:39:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:20.619 19:39:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.619 19:39:07 -- common/autotest_common.sh@10 -- # set +x 00:18:20.619 19:39:07 -- nvmf/common.sh@469 -- # nvmfpid=90518 00:18:20.619 19:39:07 -- nvmf/common.sh@470 -- # waitforlisten 90518 00:18:20.619 19:39:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.619 19:39:07 -- common/autotest_common.sh@829 -- # '[' -z 90518 ']' 00:18:20.619 19:39:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.619 19:39:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.619 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:20.619 19:39:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.619 19:39:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.619 19:39:07 -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 [2024-12-15 19:39:07.535767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:20.878 [2024-12-15 19:39:07.535869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.878 [2024-12-15 19:39:07.679612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.138 [2024-12-15 19:39:07.773991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.138 [2024-12-15 19:39:07.774186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.138 [2024-12-15 19:39:07.774204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.138 [2024-12-15 19:39:07.774225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.138 [2024-12-15 19:39:07.774386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.138 [2024-12-15 19:39:07.774477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.138 [2024-12-15 19:39:07.775301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.138 [2024-12-15 19:39:07.775361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.706 19:39:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.706 19:39:08 -- common/autotest_common.sh@862 -- # return 0 00:18:21.706 19:39:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:21.706 19:39:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.706 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.706 19:39:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.706 19:39:08 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.706 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.706 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.706 [2024-12-15 19:39:08.601396] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.966 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 Malloc1 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 
19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 [2024-12-15 19:39:08.679630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.966 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 Malloc2 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.966 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 Malloc3 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
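For reference, the four RPCs repeated above for each cnode can be reproduced outside the test harness. The sketch below is illustrative only: it assumes SPDK's scripts/rpc.py client and the default /var/tmp/spdk.sock socket (the test issues the same calls through its rpc_cmd wrapper), with the bdev size, NQNs, serials, transport options and listener address taken from the trace.

# Illustrative sketch of the per-subsystem setup loop driven by multiconnection.sh
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # same transport options as in the trace
for i in $(seq 1 11); do
  ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done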
00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.966 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 Malloc4 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:21.966 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.966 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.966 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:21.966 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.966 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.226 Malloc5 00:18:22.226 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.226 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:22.226 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.227 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 Malloc6 00:18:22.227 19:39:08 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.227 19:39:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 Malloc7 00:18:22.227 19:39:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:22.227 19:39:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.227 19:39:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 Malloc8 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 
-- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.227 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.227 19:39:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.227 19:39:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:22.227 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.227 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 Malloc9 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.487 19:39:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 Malloc10 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.487 19:39:09 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 Malloc11 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:22.487 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 19:39:09 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:22.487 19:39:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.487 19:39:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:22.746 19:39:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:22.746 19:39:09 -- common/autotest_common.sh@1187 -- # local i=0 00:18:22.746 19:39:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.746 19:39:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:22.746 19:39:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:24.650 19:39:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:24.650 19:39:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:24.650 19:39:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:24.650 19:39:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:24.650 19:39:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.651 19:39:11 -- common/autotest_common.sh@1197 -- # return 0 00:18:24.651 19:39:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:24.651 19:39:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:24.909 19:39:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:24.909 19:39:11 -- common/autotest_common.sh@1187 -- # local i=0 00:18:24.909 19:39:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.909 19:39:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:24.909 19:39:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:26.815 19:39:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:26.815 19:39:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:26.815 19:39:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:26.815 19:39:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:26.815 19:39:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.815 19:39:13 -- common/autotest_common.sh@1197 -- # return 0 00:18:26.815 19:39:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.815 19:39:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:27.074 19:39:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:27.074 19:39:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.074 19:39:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.074 19:39:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:27.074 19:39:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:28.979 19:39:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:28.979 19:39:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:28.979 19:39:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:29.239 19:39:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:29.239 19:39:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.239 19:39:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:29.239 19:39:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.239 19:39:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:29.239 19:39:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:29.239 19:39:16 -- common/autotest_common.sh@1187 -- # local i=0 00:18:29.239 19:39:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.239 19:39:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:29.239 19:39:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:31.770 19:39:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:31.770 19:39:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:31.770 19:39:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:31.770 19:39:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:31.770 19:39:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.770 19:39:18 -- common/autotest_common.sh@1197 -- # return 0 00:18:31.770 19:39:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.770 19:39:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:31.770 19:39:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:31.770 19:39:18 -- common/autotest_common.sh@1187 -- # local i=0 00:18:31.770 19:39:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.770 19:39:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:31.770 19:39:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:33.675 19:39:20 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:33.675 19:39:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:33.675 19:39:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:33.675 19:39:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:33.675 19:39:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.675 19:39:20 -- common/autotest_common.sh@1197 -- # return 0 00:18:33.675 19:39:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:33.675 19:39:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:33.675 19:39:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:33.675 19:39:20 -- common/autotest_common.sh@1187 -- # local i=0 00:18:33.675 19:39:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.675 19:39:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:33.675 19:39:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:35.579 19:39:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:35.579 19:39:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:35.579 19:39:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:35.579 19:39:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:35.579 19:39:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.579 19:39:22 -- common/autotest_common.sh@1197 -- # return 0 00:18:35.579 19:39:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.579 19:39:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:35.866 19:39:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:35.866 19:39:22 -- common/autotest_common.sh@1187 -- # local i=0 00:18:35.866 19:39:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:35.866 19:39:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:35.866 19:39:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:37.811 19:39:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:37.811 19:39:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:37.811 19:39:24 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:37.811 19:39:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:37.811 19:39:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.811 19:39:24 -- common/autotest_common.sh@1197 -- # return 0 00:18:37.811 19:39:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:37.811 19:39:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:38.070 19:39:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:38.070 19:39:24 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.070 19:39:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.070 19:39:24 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.070 19:39:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:39.974 19:39:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:39.974 19:39:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.233 19:39:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:40.233 19:39:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.233 19:39:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.233 19:39:26 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.233 19:39:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.233 19:39:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:40.233 19:39:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:40.233 19:39:27 -- common/autotest_common.sh@1187 -- # local i=0 00:18:40.233 19:39:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.233 19:39:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:40.233 19:39:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:42.772 19:39:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:42.772 19:39:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:42.772 19:39:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:42.772 19:39:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:42.772 19:39:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.772 19:39:29 -- common/autotest_common.sh@1197 -- # return 0 00:18:42.772 19:39:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.772 19:39:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:42.772 19:39:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:42.772 19:39:29 -- common/autotest_common.sh@1187 -- # local i=0 00:18:42.772 19:39:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.772 19:39:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:42.772 19:39:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:44.677 19:39:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:44.677 19:39:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:44.677 19:39:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:44.677 19:39:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:44.677 19:39:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.677 19:39:31 -- common/autotest_common.sh@1197 -- # return 0 00:18:44.677 19:39:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.677 19:39:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:44.677 19:39:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:44.677 19:39:31 -- common/autotest_common.sh@1187 -- # local i=0 
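On the initiator side, the pattern repeated above is: connect to one subsystem, then poll until a block device with the matching serial shows up. A condensed, illustrative sketch of that loop follows; the host NQN/UUID, target address and serial names are the ones printed in the trace, and the polling loop is a simplified stand-in for the waitforserial helper (which retries up to 15 times with a 2-second sleep).

# Illustrative sketch of the host-side connect-and-wait loop
for i in $(seq 1 11); do
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode$i" \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1
  # Wait until lsblk reports a namespace whose serial matches SPDK$i
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
    sleep 2
  done
done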
00:18:44.677 19:39:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.677 19:39:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:44.677 19:39:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:47.207 19:39:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:47.207 19:39:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:47.207 19:39:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:47.207 19:39:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:47.207 19:39:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.208 19:39:33 -- common/autotest_common.sh@1197 -- # return 0 00:18:47.208 19:39:33 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:47.208 [global] 00:18:47.208 thread=1 00:18:47.208 invalidate=1 00:18:47.208 rw=read 00:18:47.208 time_based=1 00:18:47.208 runtime=10 00:18:47.208 ioengine=libaio 00:18:47.208 direct=1 00:18:47.208 bs=262144 00:18:47.208 iodepth=64 00:18:47.208 norandommap=1 00:18:47.208 numjobs=1 00:18:47.208 00:18:47.208 [job0] 00:18:47.208 filename=/dev/nvme0n1 00:18:47.208 [job1] 00:18:47.208 filename=/dev/nvme10n1 00:18:47.208 [job2] 00:18:47.208 filename=/dev/nvme1n1 00:18:47.208 [job3] 00:18:47.208 filename=/dev/nvme2n1 00:18:47.208 [job4] 00:18:47.208 filename=/dev/nvme3n1 00:18:47.208 [job5] 00:18:47.208 filename=/dev/nvme4n1 00:18:47.208 [job6] 00:18:47.208 filename=/dev/nvme5n1 00:18:47.208 [job7] 00:18:47.208 filename=/dev/nvme6n1 00:18:47.208 [job8] 00:18:47.208 filename=/dev/nvme7n1 00:18:47.208 [job9] 00:18:47.208 filename=/dev/nvme8n1 00:18:47.208 [job10] 00:18:47.208 filename=/dev/nvme9n1 00:18:47.208 Could not set queue depth (nvme0n1) 00:18:47.208 Could not set queue depth (nvme10n1) 00:18:47.208 Could not set queue depth (nvme1n1) 00:18:47.208 Could not set queue depth (nvme2n1) 00:18:47.208 Could not set queue depth (nvme3n1) 00:18:47.208 Could not set queue depth (nvme4n1) 00:18:47.208 Could not set queue depth (nvme5n1) 00:18:47.208 Could not set queue depth (nvme6n1) 00:18:47.208 Could not set queue depth (nvme7n1) 00:18:47.208 Could not set queue depth (nvme8n1) 00:18:47.208 Could not set queue depth (nvme9n1) 00:18:47.208 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:47.208 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.208 fio-3.35 00:18:47.208 Starting 11 threads 00:18:59.425 00:18:59.425 job0: (groupid=0, jobs=1): err= 0: pid=91001: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=585, BW=146MiB/s (154MB/s)(1477MiB/10089msec) 00:18:59.425 slat (usec): min=18, max=76765, avg=1651.09, stdev=6284.54 00:18:59.425 clat (msec): min=18, max=242, avg=107.46, stdev=25.44 00:18:59.425 lat (msec): min=18, max=242, avg=109.11, stdev=26.35 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 50], 5.00th=[ 72], 10.00th=[ 81], 20.00th=[ 88], 00:18:59.425 | 30.00th=[ 93], 40.00th=[ 100], 50.00th=[ 107], 60.00th=[ 113], 00:18:59.425 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 138], 95.00th=[ 153], 00:18:59.425 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 226], 99.95th=[ 232], 00:18:59.425 | 99.99th=[ 243] 00:18:59.425 bw ( KiB/s): min=112128, max=187904, per=9.09%, avg=149500.40, stdev=23435.37, samples=20 00:18:59.425 iops : min= 438, max= 734, avg=583.80, stdev=91.58, samples=20 00:18:59.425 lat (msec) : 20=0.05%, 50=1.15%, 100=40.10%, 250=58.70% 00:18:59.425 cpu : usr=0.21%, sys=2.00%, ctx=1048, majf=0, minf=4097 00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:59.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.425 issued rwts: total=5908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.425 job1: (groupid=0, jobs=1): err= 0: pid=91002: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=542, BW=136MiB/s (142MB/s)(1368MiB/10076msec) 00:18:59.425 slat (usec): min=20, max=112942, avg=1823.29, stdev=6535.67 00:18:59.425 clat (msec): min=60, max=234, avg=115.90, stdev=24.90 00:18:59.425 lat (msec): min=60, max=260, avg=117.72, stdev=25.88 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 71], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 94], 00:18:59.425 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 116], 60.00th=[ 122], 00:18:59.425 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 148], 95.00th=[ 161], 00:18:59.425 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 215], 00:18:59.425 | 99.99th=[ 234] 00:18:59.425 bw ( KiB/s): min=93696, max=182784, per=8.41%, avg=138346.90, stdev=27720.75, samples=20 00:18:59.425 iops : min= 366, max= 714, avg=540.20, stdev=108.25, samples=20 00:18:59.425 lat (msec) : 100=30.01%, 250=69.99% 00:18:59.425 cpu : usr=0.20%, sys=2.15%, ctx=1004, majf=0, minf=4097 00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:59.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.425 issued rwts: total=5471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.425 job2: (groupid=0, jobs=1): err= 0: pid=91003: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=671, BW=168MiB/s (176MB/s)(1700MiB/10124msec) 00:18:59.425 slat (usec): min=15, max=105667, avg=1413.48, stdev=5901.33 00:18:59.425 clat (msec): min=4, max=246, avg=93.72, stdev=37.86 00:18:59.425 lat (msec): min=4, max=246, avg=95.13, stdev=38.69 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 50], 
20.00th=[ 68], 00:18:59.425 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 92], 00:18:59.425 | 70.00th=[ 99], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 159], 00:18:59.425 | 99.00th=[ 188], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 247], 00:18:59.425 | 99.99th=[ 247] 00:18:59.425 bw ( KiB/s): min=100352, max=334533, per=10.48%, avg=172397.75, stdev=61371.29, samples=20 00:18:59.425 iops : min= 392, max= 1306, avg=673.30, stdev=239.60, samples=20 00:18:59.425 lat (msec) : 10=0.51%, 20=0.88%, 50=8.94%, 100=61.42%, 250=28.24% 00:18:59.425 cpu : usr=0.30%, sys=2.07%, ctx=1412, majf=0, minf=4097 00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:59.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.425 issued rwts: total=6799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.425 job3: (groupid=0, jobs=1): err= 0: pid=91004: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=598, BW=150MiB/s (157MB/s)(1508MiB/10080msec) 00:18:59.425 slat (usec): min=17, max=113470, avg=1611.25, stdev=6092.16 00:18:59.425 clat (msec): min=42, max=178, avg=105.20, stdev=20.56 00:18:59.425 lat (msec): min=42, max=236, avg=106.81, stdev=21.44 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 64], 5.00th=[ 74], 10.00th=[ 80], 20.00th=[ 87], 00:18:59.425 | 30.00th=[ 93], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 112], 00:18:59.425 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 140], 00:18:59.425 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 176], 00:18:59.425 | 99.99th=[ 180] 00:18:59.425 bw ( KiB/s): min=98816, max=211366, per=9.29%, avg=152784.00, stdev=27066.80, samples=20 00:18:59.425 iops : min= 386, max= 825, avg=596.70, stdev=105.66, samples=20 00:18:59.425 lat (msec) : 50=0.36%, 100=42.75%, 250=56.88% 00:18:59.425 cpu : usr=0.21%, sys=1.89%, ctx=1257, majf=0, minf=4097 00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:59.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.425 issued rwts: total=6030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.425 job4: (groupid=0, jobs=1): err= 0: pid=91005: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=596, BW=149MiB/s (156MB/s)(1504MiB/10079msec) 00:18:59.425 slat (usec): min=13, max=113274, avg=1436.75, stdev=5873.60 00:18:59.425 clat (msec): min=4, max=254, avg=105.62, stdev=31.07 00:18:59.425 lat (msec): min=4, max=254, avg=107.06, stdev=31.81 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 10], 5.00th=[ 61], 10.00th=[ 78], 20.00th=[ 85], 00:18:59.425 | 30.00th=[ 91], 40.00th=[ 99], 50.00th=[ 108], 60.00th=[ 113], 00:18:59.425 | 70.00th=[ 121], 80.00th=[ 128], 90.00th=[ 138], 95.00th=[ 155], 00:18:59.425 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 201], 99.95th=[ 209], 00:18:59.425 | 99.99th=[ 255] 00:18:59.425 bw ( KiB/s): min=82084, max=224705, per=9.26%, avg=152370.75, stdev=34437.03, samples=20 00:18:59.425 iops : min= 320, max= 877, avg=595.00, stdev=134.55, samples=20 00:18:59.425 lat (msec) : 10=1.56%, 20=0.81%, 50=2.29%, 100=37.96%, 250=57.35% 00:18:59.425 lat (msec) : 500=0.02% 00:18:59.425 cpu : usr=0.19%, sys=2.00%, ctx=1173, majf=0, minf=4097 
00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:59.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.425 issued rwts: total=6014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.425 job5: (groupid=0, jobs=1): err= 0: pid=91006: Sun Dec 15 19:39:44 2024 00:18:59.425 read: IOPS=731, BW=183MiB/s (192MB/s)(1846MiB/10086msec) 00:18:59.425 slat (usec): min=19, max=56149, avg=1337.16, stdev=5046.29 00:18:59.425 clat (msec): min=5, max=184, avg=85.88, stdev=39.15 00:18:59.425 lat (msec): min=5, max=184, avg=87.22, stdev=39.97 00:18:59.425 clat percentiles (msec): 00:18:59.425 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 35], 00:18:59.425 | 30.00th=[ 55], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 106], 00:18:59.425 | 70.00th=[ 115], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 136], 00:18:59.425 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 180], 00:18:59.425 | 99.99th=[ 184] 00:18:59.425 bw ( KiB/s): min=123145, max=487984, per=11.38%, avg=187238.65, stdev=108592.67, samples=20 00:18:59.425 iops : min= 481, max= 1906, avg=731.25, stdev=424.21, samples=20 00:18:59.425 lat (msec) : 10=0.03%, 20=1.42%, 50=28.08%, 100=25.16%, 250=45.31% 00:18:59.425 cpu : usr=0.21%, sys=2.24%, ctx=1557, majf=0, minf=4097 00:18:59.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=7382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 job6: (groupid=0, jobs=1): err= 0: pid=91007: Sun Dec 15 19:39:44 2024 00:18:59.426 read: IOPS=537, BW=134MiB/s (141MB/s)(1361MiB/10132msec) 00:18:59.426 slat (usec): min=14, max=126304, avg=1778.89, stdev=6952.83 00:18:59.426 clat (usec): min=536, max=271218, avg=117110.69, stdev=35233.46 00:18:59.426 lat (usec): min=818, max=271261, avg=118889.58, stdev=36244.87 00:18:59.426 clat percentiles (usec): 00:18:59.426 | 1.00th=[ 1045], 5.00th=[ 54264], 10.00th=[ 65799], 20.00th=[ 96994], 00:18:59.426 | 30.00th=[110625], 40.00th=[114820], 50.00th=[120062], 60.00th=[127402], 00:18:59.426 | 70.00th=[135267], 80.00th=[143655], 90.00th=[154141], 95.00th=[162530], 00:18:59.426 | 99.00th=[189793], 99.50th=[223347], 99.90th=[242222], 99.95th=[242222], 00:18:59.426 | 99.99th=[270533] 00:18:59.426 bw ( KiB/s): min=101888, max=238592, per=8.37%, avg=137620.40, stdev=35414.28, samples=20 00:18:59.426 iops : min= 398, max= 932, avg=537.45, stdev=138.31, samples=20 00:18:59.426 lat (usec) : 750=0.02%, 1000=0.79% 00:18:59.426 lat (msec) : 2=0.61%, 10=0.04%, 20=0.28%, 50=1.76%, 100=17.28% 00:18:59.426 lat (msec) : 250=79.19%, 500=0.04% 00:18:59.426 cpu : usr=0.18%, sys=1.70%, ctx=1212, majf=0, minf=4097 00:18:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=5445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 job7: (groupid=0, jobs=1): err= 0: pid=91008: Sun Dec 
15 19:39:44 2024 00:18:59.426 read: IOPS=619, BW=155MiB/s (162MB/s)(1570MiB/10131msec) 00:18:59.426 slat (usec): min=14, max=87770, avg=1538.44, stdev=5828.50 00:18:59.426 clat (msec): min=22, max=270, avg=101.49, stdev=36.22 00:18:59.426 lat (msec): min=22, max=270, avg=103.02, stdev=37.07 00:18:59.426 clat percentiles (msec): 00:18:59.426 | 1.00th=[ 34], 5.00th=[ 50], 10.00th=[ 65], 20.00th=[ 79], 00:18:59.426 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 97], 00:18:59.426 | 70.00th=[ 107], 80.00th=[ 142], 90.00th=[ 155], 95.00th=[ 163], 00:18:59.426 | 99.00th=[ 188], 99.50th=[ 232], 99.90th=[ 271], 99.95th=[ 271], 00:18:59.426 | 99.99th=[ 271] 00:18:59.426 bw ( KiB/s): min=94530, max=305053, per=9.67%, avg=158989.95, stdev=53810.72, samples=20 00:18:59.426 iops : min= 369, max= 1191, avg=620.95, stdev=210.10, samples=20 00:18:59.426 lat (msec) : 50=5.05%, 100=59.08%, 250=35.43%, 500=0.45% 00:18:59.426 cpu : usr=0.23%, sys=1.94%, ctx=1226, majf=0, minf=4097 00:18:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=6280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 job8: (groupid=0, jobs=1): err= 0: pid=91009: Sun Dec 15 19:39:44 2024 00:18:59.426 read: IOPS=571, BW=143MiB/s (150MB/s)(1440MiB/10087msec) 00:18:59.426 slat (usec): min=15, max=92123, avg=1626.88, stdev=6321.80 00:18:59.426 clat (msec): min=2, max=276, avg=110.23, stdev=31.31 00:18:59.426 lat (msec): min=3, max=276, avg=111.86, stdev=31.97 00:18:59.426 clat percentiles (msec): 00:18:59.426 | 1.00th=[ 16], 5.00th=[ 74], 10.00th=[ 80], 20.00th=[ 89], 00:18:59.426 | 30.00th=[ 94], 40.00th=[ 102], 50.00th=[ 110], 60.00th=[ 115], 00:18:59.426 | 70.00th=[ 122], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 167], 00:18:59.426 | 99.00th=[ 207], 99.50th=[ 222], 99.90th=[ 266], 99.95th=[ 271], 00:18:59.426 | 99.99th=[ 275] 00:18:59.426 bw ( KiB/s): min=81920, max=184320, per=8.86%, avg=145693.20, stdev=30349.65, samples=20 00:18:59.426 iops : min= 320, max= 720, avg=568.95, stdev=118.56, samples=20 00:18:59.426 lat (msec) : 4=0.24%, 10=0.50%, 20=0.68%, 50=0.45%, 100=37.40% 00:18:59.426 lat (msec) : 250=60.35%, 500=0.38% 00:18:59.426 cpu : usr=0.18%, sys=1.86%, ctx=1112, majf=0, minf=4097 00:18:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 job9: (groupid=0, jobs=1): err= 0: pid=91010: Sun Dec 15 19:39:44 2024 00:18:59.426 read: IOPS=532, BW=133MiB/s (140MB/s)(1348MiB/10126msec) 00:18:59.426 slat (usec): min=20, max=103498, avg=1788.39, stdev=6535.83 00:18:59.426 clat (msec): min=10, max=337, avg=118.16, stdev=40.20 00:18:59.426 lat (msec): min=10, max=347, avg=119.95, stdev=41.08 00:18:59.426 clat percentiles (msec): 00:18:59.426 | 1.00th=[ 51], 5.00th=[ 77], 10.00th=[ 82], 20.00th=[ 88], 00:18:59.426 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 114], 00:18:59.426 | 70.00th=[ 146], 80.00th=[ 155], 90.00th=[ 171], 95.00th=[ 188], 00:18:59.426 | 99.00th=[ 245], 
99.50th=[ 266], 99.90th=[ 317], 99.95th=[ 338], 00:18:59.426 | 99.99th=[ 338] 00:18:59.426 bw ( KiB/s): min=86183, max=183808, per=8.29%, avg=136425.90, stdev=36744.13, samples=20 00:18:59.426 iops : min= 336, max= 718, avg=532.80, stdev=143.51, samples=20 00:18:59.426 lat (msec) : 20=0.35%, 50=0.61%, 100=48.67%, 250=49.47%, 500=0.89% 00:18:59.426 cpu : usr=0.18%, sys=1.72%, ctx=1168, majf=0, minf=4097 00:18:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=5393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 job10: (groupid=0, jobs=1): err= 0: pid=91011: Sun Dec 15 19:39:44 2024 00:18:59.426 read: IOPS=455, BW=114MiB/s (119MB/s)(1154MiB/10129msec) 00:18:59.426 slat (usec): min=14, max=94753, avg=2133.09, stdev=7581.54 00:18:59.426 clat (msec): min=23, max=270, avg=138.01, stdev=26.00 00:18:59.426 lat (msec): min=24, max=270, avg=140.14, stdev=27.16 00:18:59.426 clat percentiles (msec): 00:18:59.426 | 1.00th=[ 31], 5.00th=[ 107], 10.00th=[ 113], 20.00th=[ 118], 00:18:59.426 | 30.00th=[ 124], 40.00th=[ 129], 50.00th=[ 138], 60.00th=[ 146], 00:18:59.426 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:18:59.426 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 271], 99.95th=[ 271], 00:18:59.426 | 99.99th=[ 271] 00:18:59.426 bw ( KiB/s): min=83623, max=145117, per=7.08%, avg=116438.85, stdev=15210.65, samples=20 00:18:59.426 iops : min= 326, max= 566, avg=454.75, stdev=59.41, samples=20 00:18:59.426 lat (msec) : 50=1.37%, 100=1.13%, 250=97.18%, 500=0.33% 00:18:59.426 cpu : usr=0.21%, sys=1.42%, ctx=991, majf=0, minf=4097 00:18:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.426 issued rwts: total=4615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.426 00:18:59.426 Run status group 0 (all jobs): 00:18:59.426 READ: bw=1606MiB/s (1684MB/s), 114MiB/s-183MiB/s (119MB/s-192MB/s), io=15.9GiB (17.1GB), run=10076-10132msec 00:18:59.426 00:18:59.426 Disk stats (read/write): 00:18:59.426 nvme0n1: ios=11700/0, merge=0/0, ticks=1242094/0, in_queue=1242094, util=97.61% 00:18:59.426 nvme10n1: ios=10815/0, merge=0/0, ticks=1239748/0, in_queue=1239748, util=97.77% 00:18:59.426 nvme1n1: ios=13471/0, merge=0/0, ticks=1234328/0, in_queue=1234328, util=97.76% 00:18:59.426 nvme2n1: ios=11932/0, merge=0/0, ticks=1238554/0, in_queue=1238554, util=97.60% 00:18:59.426 nvme3n1: ios=11901/0, merge=0/0, ticks=1244765/0, in_queue=1244765, util=97.94% 00:18:59.426 nvme4n1: ios=14637/0, merge=0/0, ticks=1235213/0, in_queue=1235213, util=98.22% 00:18:59.426 nvme5n1: ios=10763/0, merge=0/0, ticks=1237097/0, in_queue=1237097, util=98.24% 00:18:59.426 nvme6n1: ios=12467/0, merge=0/0, ticks=1237972/0, in_queue=1237972, util=98.33% 00:18:59.426 nvme7n1: ios=11392/0, merge=0/0, ticks=1239651/0, in_queue=1239651, util=98.63% 00:18:59.426 nvme8n1: ios=10659/0, merge=0/0, ticks=1235346/0, in_queue=1235346, util=98.47% 00:18:59.426 nvme9n1: ios=9125/0, merge=0/0, ticks=1241355/0, in_queue=1241355, util=98.79% 00:18:59.426 19:39:44 -- 
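The generated job file listed earlier maps one fio job to each connected namespace (/dev/nvme0n1 through /dev/nvme9n1 plus /dev/nvme10n1), and the per-job summaries above report the resulting bandwidth and latency per namespace. As a rough stand-alone equivalent of a single one of those read jobs (illustrative only: the device path is taken from the job listing and the options mirror its [global] section; the test itself always goes through scripts/fio-wrapper):

# Illustrative single-job equivalent of one generated read job
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=read --bs=262144 --iodepth=64 --numjobs=1 --norandommap \
    --time_based --runtime=10 --invalidate=1

The second fio-wrapper pass that follows generates the same per-namespace layout with rw=randwrite.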
target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:59.426 [global] 00:18:59.426 thread=1 00:18:59.426 invalidate=1 00:18:59.426 rw=randwrite 00:18:59.426 time_based=1 00:18:59.426 runtime=10 00:18:59.426 ioengine=libaio 00:18:59.426 direct=1 00:18:59.426 bs=262144 00:18:59.426 iodepth=64 00:18:59.426 norandommap=1 00:18:59.426 numjobs=1 00:18:59.426 00:18:59.426 [job0] 00:18:59.426 filename=/dev/nvme0n1 00:18:59.427 [job1] 00:18:59.427 filename=/dev/nvme10n1 00:18:59.427 [job2] 00:18:59.427 filename=/dev/nvme1n1 00:18:59.427 [job3] 00:18:59.427 filename=/dev/nvme2n1 00:18:59.427 [job4] 00:18:59.427 filename=/dev/nvme3n1 00:18:59.427 [job5] 00:18:59.427 filename=/dev/nvme4n1 00:18:59.427 [job6] 00:18:59.427 filename=/dev/nvme5n1 00:18:59.427 [job7] 00:18:59.427 filename=/dev/nvme6n1 00:18:59.427 [job8] 00:18:59.427 filename=/dev/nvme7n1 00:18:59.427 [job9] 00:18:59.427 filename=/dev/nvme8n1 00:18:59.427 [job10] 00:18:59.427 filename=/dev/nvme9n1 00:18:59.427 Could not set queue depth (nvme0n1) 00:18:59.427 Could not set queue depth (nvme10n1) 00:18:59.427 Could not set queue depth (nvme1n1) 00:18:59.427 Could not set queue depth (nvme2n1) 00:18:59.427 Could not set queue depth (nvme3n1) 00:18:59.427 Could not set queue depth (nvme4n1) 00:18:59.427 Could not set queue depth (nvme5n1) 00:18:59.427 Could not set queue depth (nvme6n1) 00:18:59.427 Could not set queue depth (nvme7n1) 00:18:59.427 Could not set queue depth (nvme8n1) 00:18:59.427 Could not set queue depth (nvme9n1) 00:18:59.427 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:59.427 fio-3.35 00:18:59.427 Starting 11 threads 00:19:09.411 00:19:09.411 job0: (groupid=0, jobs=1): err= 0: pid=91212: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=416, BW=104MiB/s (109MB/s)(1056MiB/10147msec); 0 zone resets 00:19:09.411 slat (usec): min=19, max=25129, avg=2362.71, stdev=4056.51 00:19:09.411 clat (msec): min=3, max=304, avg=151.27, stdev=19.79 00:19:09.411 lat (msec): min=3, max=304, avg=153.63, stdev=19.67 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 83], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 142], 
00:19:09.411 | 30.00th=[ 144], 40.00th=[ 144], 50.00th=[ 150], 60.00th=[ 157], 00:19:09.411 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 167], 95.00th=[ 169], 00:19:09.411 | 99.00th=[ 190], 99.50th=[ 243], 99.90th=[ 296], 99.95th=[ 296], 00:19:09.411 | 99.99th=[ 305] 00:19:09.411 bw ( KiB/s): min=96256, max=115712, per=6.71%, avg=106547.20, stdev=7693.28, samples=20 00:19:09.411 iops : min= 376, max= 452, avg=416.20, stdev=30.05, samples=20 00:19:09.411 lat (msec) : 4=0.07%, 10=0.17%, 20=0.19%, 50=0.28%, 100=0.57% 00:19:09.411 lat (msec) : 250=98.22%, 500=0.50% 00:19:09.411 cpu : usr=1.39%, sys=1.30%, ctx=5259, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.411 job1: (groupid=0, jobs=1): err= 0: pid=91213: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=1409, BW=352MiB/s (370MB/s)(3540MiB/10042msec); 0 zone resets 00:19:09.411 slat (usec): min=22, max=9700, avg=702.10, stdev=1196.75 00:19:09.411 clat (msec): min=13, max=116, avg=44.67, stdev=10.05 00:19:09.411 lat (msec): min=13, max=117, avg=45.37, stdev=10.20 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:19:09.411 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:19:09.411 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 48], 95.00th=[ 48], 00:19:09.411 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:19:09.411 | 99.99th=[ 117] 00:19:09.411 bw ( KiB/s): min=141312, max=400384, per=22.73%, avg=360823.15, stdev=55541.45, samples=20 00:19:09.411 iops : min= 552, max= 1564, avg=1409.45, stdev=216.96, samples=20 00:19:09.411 lat (msec) : 20=0.03%, 50=97.17%, 100=0.98%, 250=1.82% 00:19:09.411 cpu : usr=3.66%, sys=3.22%, ctx=17972, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,14159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.411 job2: (groupid=0, jobs=1): err= 0: pid=91225: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=416, BW=104MiB/s (109MB/s)(1057MiB/10141msec); 0 zone resets 00:19:09.411 slat (usec): min=19, max=11775, avg=2362.93, stdev=4037.28 00:19:09.411 clat (msec): min=7, max=300, avg=151.13, stdev=18.60 00:19:09.411 lat (msec): min=7, max=300, avg=153.49, stdev=18.43 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 102], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 142], 00:19:09.411 | 30.00th=[ 144], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 157], 00:19:09.411 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 167], 95.00th=[ 169], 00:19:09.411 | 99.00th=[ 188], 99.50th=[ 249], 99.90th=[ 292], 99.95th=[ 292], 00:19:09.411 | 99.99th=[ 300] 00:19:09.411 bw ( KiB/s): min=96256, max=117760, per=6.71%, avg=106562.75, stdev=7937.29, samples=20 00:19:09.411 iops : min= 376, max= 460, avg=416.25, stdev=31.01, samples=20 00:19:09.411 lat (msec) : 10=0.05%, 20=0.09%, 50=0.28%, 100=0.57%, 250=98.58% 00:19:09.411 lat (msec) : 500=0.43% 
00:19:09.411 cpu : usr=1.07%, sys=1.07%, ctx=6963, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,4226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.411 job3: (groupid=0, jobs=1): err= 0: pid=91226: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=414, BW=104MiB/s (109MB/s)(1051MiB/10141msec); 0 zone resets 00:19:09.411 slat (usec): min=17, max=26150, avg=2373.41, stdev=4059.78 00:19:09.411 clat (msec): min=5, max=294, avg=151.94, stdev=18.07 00:19:09.411 lat (msec): min=5, max=305, avg=154.31, stdev=17.88 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 118], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 142], 00:19:09.411 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 157], 00:19:09.411 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 167], 95.00th=[ 169], 00:19:09.411 | 99.00th=[ 192], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 296], 00:19:09.411 | 99.99th=[ 296] 00:19:09.411 bw ( KiB/s): min=96768, max=114688, per=6.68%, avg=105999.75, stdev=7343.58, samples=20 00:19:09.411 iops : min= 378, max= 448, avg=414.05, stdev=28.70, samples=20 00:19:09.411 lat (msec) : 10=0.02%, 20=0.10%, 50=0.38%, 100=0.29%, 250=98.74% 00:19:09.411 lat (msec) : 500=0.48% 00:19:09.411 cpu : usr=1.29%, sys=1.16%, ctx=5170, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,4204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.411 job4: (groupid=0, jobs=1): err= 0: pid=91227: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=505, BW=126MiB/s (133MB/s)(1283MiB/10152msec); 0 zone resets 00:19:09.411 slat (usec): min=23, max=30413, avg=1945.22, stdev=3439.49 00:19:09.411 clat (msec): min=5, max=300, avg=124.54, stdev=30.41 00:19:09.411 lat (msec): min=5, max=300, avg=126.48, stdev=30.66 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 66], 5.00th=[ 73], 10.00th=[ 102], 20.00th=[ 107], 00:19:09.411 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 120], 00:19:09.411 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:19:09.411 | 99.00th=[ 182], 99.50th=[ 241], 99.90th=[ 292], 99.95th=[ 292], 00:19:09.411 | 99.99th=[ 300] 00:19:09.411 bw ( KiB/s): min=102400, max=212480, per=8.17%, avg=129755.95, stdev=29249.75, samples=20 00:19:09.411 iops : min= 400, max= 830, avg=506.85, stdev=114.27, samples=20 00:19:09.411 lat (msec) : 10=0.10%, 20=0.08%, 50=0.16%, 100=9.08%, 250=90.16% 00:19:09.411 lat (msec) : 500=0.43% 00:19:09.411 cpu : usr=1.49%, sys=1.28%, ctx=4162, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,5132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.411 job5: (groupid=0, jobs=1): err= 0: 
pid=91228: Sun Dec 15 19:39:55 2024 00:19:09.411 write: IOPS=494, BW=124MiB/s (130MB/s)(1253MiB/10144msec); 0 zone resets 00:19:09.411 slat (usec): min=18, max=14458, avg=1972.22, stdev=3469.68 00:19:09.411 clat (msec): min=9, max=300, avg=127.46, stdev=27.90 00:19:09.411 lat (msec): min=9, max=300, avg=129.43, stdev=28.12 00:19:09.411 clat percentiles (msec): 00:19:09.411 | 1.00th=[ 57], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 109], 00:19:09.411 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 121], 00:19:09.411 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163], 00:19:09.411 | 99.00th=[ 180], 99.50th=[ 243], 99.90th=[ 292], 99.95th=[ 292], 00:19:09.411 | 99.99th=[ 300] 00:19:09.411 bw ( KiB/s): min=100864, max=157696, per=7.98%, avg=126683.95, stdev=22895.70, samples=20 00:19:09.411 iops : min= 394, max= 616, avg=494.85, stdev=89.45, samples=20 00:19:09.411 lat (msec) : 10=0.02%, 20=0.08%, 50=0.72%, 100=1.82%, 250=96.93% 00:19:09.411 lat (msec) : 500=0.44% 00:19:09.411 cpu : usr=1.41%, sys=1.25%, ctx=4818, majf=0, minf=1 00:19:09.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.411 issued rwts: total=0,5012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 job6: (groupid=0, jobs=1): err= 0: pid=91229: Sun Dec 15 19:39:55 2024 00:19:09.412 write: IOPS=552, BW=138MiB/s (145MB/s)(1397MiB/10109msec); 0 zone resets 00:19:09.412 slat (usec): min=26, max=9659, avg=1783.65, stdev=3045.11 00:19:09.412 clat (msec): min=4, max=236, avg=113.95, stdev=17.99 00:19:09.412 lat (msec): min=4, max=236, avg=115.73, stdev=18.01 00:19:09.412 clat percentiles (msec): 00:19:09.412 | 1.00th=[ 62], 5.00th=[ 77], 10.00th=[ 103], 20.00th=[ 108], 00:19:09.412 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 121], 00:19:09.412 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 130], 95.00th=[ 131], 00:19:09.412 | 99.00th=[ 136], 99.50th=[ 180], 99.90th=[ 222], 99.95th=[ 228], 00:19:09.412 | 99.99th=[ 236] 00:19:09.412 bw ( KiB/s): min=123656, max=212480, per=8.91%, avg=141412.85, stdev=19878.28, samples=20 00:19:09.412 iops : min= 483, max= 830, avg=552.35, stdev=77.64, samples=20 00:19:09.412 lat (msec) : 10=0.20%, 20=0.14%, 50=0.50%, 100=7.05%, 250=92.11% 00:19:09.412 cpu : usr=1.81%, sys=1.60%, ctx=5656, majf=0, minf=1 00:19:09.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:09.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.412 issued rwts: total=0,5588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 job7: (groupid=0, jobs=1): err= 0: pid=91230: Sun Dec 15 19:39:55 2024 00:19:09.412 write: IOPS=414, BW=104MiB/s (109MB/s)(1051MiB/10134msec); 0 zone resets 00:19:09.412 slat (usec): min=18, max=25556, avg=2375.07, stdev=4045.54 00:19:09.412 clat (msec): min=14, max=288, avg=151.81, stdev=17.20 00:19:09.412 lat (msec): min=14, max=288, avg=154.19, stdev=16.99 00:19:09.412 clat percentiles (msec): 00:19:09.412 | 1.00th=[ 106], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 142], 00:19:09.412 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 157], 00:19:09.412 | 70.00th=[ 163], 
80.00th=[ 165], 90.00th=[ 167], 95.00th=[ 169], 00:19:09.412 | 99.00th=[ 176], 99.50th=[ 236], 99.90th=[ 279], 99.95th=[ 279], 00:19:09.412 | 99.99th=[ 288] 00:19:09.412 bw ( KiB/s): min=96256, max=115200, per=6.68%, avg=106035.95, stdev=7368.88, samples=20 00:19:09.412 iops : min= 376, max= 450, avg=414.15, stdev=28.80, samples=20 00:19:09.412 lat (msec) : 20=0.07%, 50=0.29%, 100=0.57%, 250=98.74%, 500=0.33% 00:19:09.412 cpu : usr=1.39%, sys=1.06%, ctx=6110, majf=0, minf=1 00:19:09.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:09.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.412 issued rwts: total=0,4205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 job8: (groupid=0, jobs=1): err= 0: pid=91231: Sun Dec 15 19:39:55 2024 00:19:09.412 write: IOPS=507, BW=127MiB/s (133MB/s)(1288MiB/10154msec); 0 zone resets 00:19:09.412 slat (usec): min=23, max=22890, avg=1937.02, stdev=3425.38 00:19:09.412 clat (msec): min=8, max=305, avg=124.11, stdev=31.40 00:19:09.412 lat (msec): min=8, max=305, avg=126.04, stdev=31.67 00:19:09.412 clat percentiles (msec): 00:19:09.412 | 1.00th=[ 44], 5.00th=[ 73], 10.00th=[ 102], 20.00th=[ 107], 00:19:09.412 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 120], 00:19:09.412 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163], 00:19:09.412 | 99.00th=[ 182], 99.50th=[ 247], 99.90th=[ 296], 99.95th=[ 296], 00:19:09.412 | 99.99th=[ 305] 00:19:09.412 bw ( KiB/s): min=102195, max=221627, per=8.21%, avg=130290.30, stdev=30707.58, samples=20 00:19:09.412 iops : min= 399, max= 865, avg=508.90, stdev=119.85, samples=20 00:19:09.412 lat (msec) : 10=0.08%, 20=0.16%, 50=0.91%, 100=8.64%, 250=89.79% 00:19:09.412 lat (msec) : 500=0.43% 00:19:09.412 cpu : usr=1.23%, sys=1.48%, ctx=4717, majf=0, minf=1 00:19:09.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:09.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.412 issued rwts: total=0,5152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 job9: (groupid=0, jobs=1): err= 0: pid=91232: Sun Dec 15 19:39:55 2024 00:19:09.412 write: IOPS=542, BW=136MiB/s (142MB/s)(1372MiB/10106msec); 0 zone resets 00:19:09.412 slat (usec): min=20, max=9115, avg=1799.70, stdev=3084.83 00:19:09.412 clat (msec): min=7, max=230, avg=116.02, stdev=14.92 00:19:09.412 lat (msec): min=7, max=230, avg=117.82, stdev=14.89 00:19:09.412 clat percentiles (msec): 00:19:09.412 | 1.00th=[ 55], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 109], 00:19:09.412 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 121], 00:19:09.412 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 130], 95.00th=[ 131], 00:19:09.412 | 99.00th=[ 136], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 224], 00:19:09.412 | 99.99th=[ 232] 00:19:09.412 bw ( KiB/s): min=124928, max=156160, per=8.75%, avg=138856.60, stdev=11509.88, samples=20 00:19:09.412 iops : min= 488, max= 610, avg=542.40, stdev=44.95, samples=20 00:19:09.412 lat (msec) : 10=0.02%, 20=0.07%, 50=0.77%, 100=1.64%, 250=97.50% 00:19:09.412 cpu : usr=1.66%, sys=1.44%, ctx=7359, majf=0, minf=1 00:19:09.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.9% 00:19:09.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.412 issued rwts: total=0,5487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 job10: (groupid=0, jobs=1): err= 0: pid=91233: Sun Dec 15 19:39:55 2024 00:19:09.412 write: IOPS=552, BW=138MiB/s (145MB/s)(1395MiB/10108msec); 0 zone resets 00:19:09.412 slat (usec): min=20, max=11110, avg=1787.31, stdev=3044.06 00:19:09.412 clat (msec): min=3, max=229, avg=114.10, stdev=17.20 00:19:09.412 lat (msec): min=4, max=229, avg=115.89, stdev=17.19 00:19:09.412 clat percentiles (msec): 00:19:09.412 | 1.00th=[ 68], 5.00th=[ 77], 10.00th=[ 103], 20.00th=[ 108], 00:19:09.412 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 121], 00:19:09.412 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 130], 95.00th=[ 131], 00:19:09.412 | 99.00th=[ 136], 99.50th=[ 171], 99.90th=[ 213], 99.95th=[ 222], 00:19:09.412 | 99.99th=[ 230] 00:19:09.412 bw ( KiB/s): min=124928, max=207872, per=8.90%, avg=141222.50, stdev=18883.24, samples=20 00:19:09.412 iops : min= 488, max= 812, avg=551.65, stdev=73.76, samples=20 00:19:09.412 lat (msec) : 4=0.02%, 10=0.07%, 20=0.14%, 50=0.43%, 100=6.94% 00:19:09.412 lat (msec) : 250=92.40% 00:19:09.412 cpu : usr=1.77%, sys=1.64%, ctx=8371, majf=0, minf=1 00:19:09.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:09.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:09.412 issued rwts: total=0,5580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.412 00:19:09.412 Run status group 0 (all jobs): 00:19:09.412 WRITE: bw=1550MiB/s (1626MB/s), 104MiB/s-352MiB/s (109MB/s-370MB/s), io=15.4GiB (16.5GB), run=10042-10154msec 00:19:09.412 00:19:09.412 Disk stats (read/write): 00:19:09.412 nvme0n1: ios=49/8318, merge=0/0, ticks=24/1212714, in_queue=1212738, util=97.74% 00:19:09.412 nvme10n1: ios=49/28125, merge=0/0, ticks=59/1215889, in_queue=1215948, util=97.93% 00:19:09.412 nvme1n1: ios=0/8315, merge=0/0, ticks=0/1211499, in_queue=1211499, util=97.81% 00:19:09.412 nvme2n1: ios=3/8272, merge=0/0, ticks=6/1211261, in_queue=1211267, util=97.98% 00:19:09.412 nvme3n1: ios=0/10121, merge=0/0, ticks=0/1210042, in_queue=1210042, util=98.00% 00:19:09.412 nvme4n1: ios=0/9889, merge=0/0, ticks=0/1211616, in_queue=1211616, util=98.21% 00:19:09.412 nvme5n1: ios=0/11041, merge=0/0, ticks=0/1213008, in_queue=1213008, util=98.39% 00:19:09.412 nvme6n1: ios=0/8262, merge=0/0, ticks=0/1210129, in_queue=1210129, util=98.33% 00:19:09.412 nvme7n1: ios=0/10164, merge=0/0, ticks=0/1211160, in_queue=1211160, util=98.69% 00:19:09.412 nvme8n1: ios=0/10823, merge=0/0, ticks=0/1212843, in_queue=1212843, util=98.73% 00:19:09.412 nvme9n1: ios=0/11007, merge=0/0, ticks=0/1212706, in_queue=1212706, util=98.83% 00:19:09.412 19:39:55 -- target/multiconnection.sh@36 -- # sync 00:19:09.412 19:39:55 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:09.412 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.412 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:09.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.412 19:39:55 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK1 00:19:09.412 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:09.412 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.412 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:09.412 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.412 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.412 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.412 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.412 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.412 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.412 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:09.412 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:09.412 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:09.412 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.412 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.412 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:09.412 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.412 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:09.412 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.412 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.412 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.412 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.412 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:09.412 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:09.412 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:09.412 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.412 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local 
i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 
19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:09.413 19:39:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:09.413 19:39:55 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:09.413 19:39:55 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:09.413 19:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:09.413 19:39:56 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:09.413 19:39:56 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:09.413 19:39:56 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:09.413 19:39:56 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:56 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:09.413 19:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:56 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.413 19:39:56 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:09.413 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:09.413 19:39:56 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:09.413 19:39:56 -- common/autotest_common.sh@1208 -- # local i=0 00:19:09.413 19:39:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:09.413 19:39:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:09.413 19:39:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:09.413 19:39:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:09.413 19:39:56 -- common/autotest_common.sh@1220 -- # return 0 00:19:09.413 19:39:56 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:09.413 19:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 19:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 19:39:56 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:09.413 19:39:56 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:09.413 19:39:56 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:09.413 19:39:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:09.413 19:39:56 -- nvmf/common.sh@116 -- # sync 00:19:09.413 19:39:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:09.413 19:39:56 -- nvmf/common.sh@119 -- # set +e 00:19:09.413 19:39:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:09.413 19:39:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:09.413 rmmod nvme_tcp 00:19:09.413 rmmod nvme_fabrics 00:19:09.413 rmmod nvme_keyring 00:19:09.413 19:39:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:09.413 19:39:56 -- nvmf/common.sh@123 -- # set -e 00:19:09.413 19:39:56 -- nvmf/common.sh@124 -- # return 0 00:19:09.413 19:39:56 -- nvmf/common.sh@477 -- # '[' -n 90518 ']' 00:19:09.413 19:39:56 -- nvmf/common.sh@478 -- # killprocess 90518 00:19:09.413 19:39:56 -- common/autotest_common.sh@936 -- # '[' -z 90518 ']' 00:19:09.413 19:39:56 -- common/autotest_common.sh@940 -- # kill -0 90518 00:19:09.413 19:39:56 -- common/autotest_common.sh@941 -- # uname 00:19:09.683 19:39:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.683 19:39:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90518 00:19:09.683 killing process with pid 90518 00:19:09.683 19:39:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:09.683 19:39:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:09.683 19:39:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90518' 00:19:09.683 19:39:56 -- common/autotest_common.sh@955 -- # kill 90518 00:19:09.683 19:39:56 -- 
common/autotest_common.sh@960 -- # wait 90518 00:19:10.265 19:39:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:10.265 19:39:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:10.265 19:39:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:10.265 19:39:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.265 19:39:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:10.265 19:39:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.265 19:39:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.265 19:39:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.265 19:39:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:10.265 00:19:10.265 real 0m50.094s 00:19:10.265 user 2m46.465s 00:19:10.265 sys 0m25.872s 00:19:10.265 19:39:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:10.265 ************************************ 00:19:10.265 END TEST nvmf_multiconnection 00:19:10.265 ************************************ 00:19:10.265 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.265 19:39:57 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:10.265 19:39:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.265 19:39:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.265 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.265 ************************************ 00:19:10.265 START TEST nvmf_initiator_timeout 00:19:10.265 ************************************ 00:19:10.265 19:39:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:10.265 * Looking for test storage... 00:19:10.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:10.265 19:39:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:10.265 19:39:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:10.265 19:39:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:10.524 19:39:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:10.524 19:39:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:10.524 19:39:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:10.524 19:39:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:10.524 19:39:57 -- scripts/common.sh@335 -- # IFS=.-: 00:19:10.524 19:39:57 -- scripts/common.sh@335 -- # read -ra ver1 00:19:10.524 19:39:57 -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.524 19:39:57 -- scripts/common.sh@336 -- # read -ra ver2 00:19:10.524 19:39:57 -- scripts/common.sh@337 -- # local 'op=<' 00:19:10.524 19:39:57 -- scripts/common.sh@339 -- # ver1_l=2 00:19:10.525 19:39:57 -- scripts/common.sh@340 -- # ver2_l=1 00:19:10.525 19:39:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:10.525 19:39:57 -- scripts/common.sh@343 -- # case "$op" in 00:19:10.525 19:39:57 -- scripts/common.sh@344 -- # : 1 00:19:10.525 19:39:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:10.525 19:39:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.525 19:39:57 -- scripts/common.sh@364 -- # decimal 1 00:19:10.525 19:39:57 -- scripts/common.sh@352 -- # local d=1 00:19:10.525 19:39:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.525 19:39:57 -- scripts/common.sh@354 -- # echo 1 00:19:10.525 19:39:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:10.525 19:39:57 -- scripts/common.sh@365 -- # decimal 2 00:19:10.525 19:39:57 -- scripts/common.sh@352 -- # local d=2 00:19:10.525 19:39:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.525 19:39:57 -- scripts/common.sh@354 -- # echo 2 00:19:10.525 19:39:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:10.525 19:39:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:10.525 19:39:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:10.525 19:39:57 -- scripts/common.sh@367 -- # return 0 00:19:10.525 19:39:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.525 19:39:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.525 --rc genhtml_branch_coverage=1 00:19:10.525 --rc genhtml_function_coverage=1 00:19:10.525 --rc genhtml_legend=1 00:19:10.525 --rc geninfo_all_blocks=1 00:19:10.525 --rc geninfo_unexecuted_blocks=1 00:19:10.525 00:19:10.525 ' 00:19:10.525 19:39:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.525 --rc genhtml_branch_coverage=1 00:19:10.525 --rc genhtml_function_coverage=1 00:19:10.525 --rc genhtml_legend=1 00:19:10.525 --rc geninfo_all_blocks=1 00:19:10.525 --rc geninfo_unexecuted_blocks=1 00:19:10.525 00:19:10.525 ' 00:19:10.525 19:39:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.525 --rc genhtml_branch_coverage=1 00:19:10.525 --rc genhtml_function_coverage=1 00:19:10.525 --rc genhtml_legend=1 00:19:10.525 --rc geninfo_all_blocks=1 00:19:10.525 --rc geninfo_unexecuted_blocks=1 00:19:10.525 00:19:10.525 ' 00:19:10.525 19:39:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.525 --rc genhtml_branch_coverage=1 00:19:10.525 --rc genhtml_function_coverage=1 00:19:10.525 --rc genhtml_legend=1 00:19:10.525 --rc geninfo_all_blocks=1 00:19:10.525 --rc geninfo_unexecuted_blocks=1 00:19:10.525 00:19:10.525 ' 00:19:10.525 19:39:57 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.525 19:39:57 -- nvmf/common.sh@7 -- # uname -s 00:19:10.525 19:39:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.525 19:39:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.525 19:39:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.525 19:39:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.525 19:39:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.525 19:39:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.525 19:39:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.525 19:39:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.525 19:39:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.525 19:39:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
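The lt/cmp_versions trace a few lines above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2.x before exporting the LCOV_OPTS coverage flags. A minimal sketch of that kind of field-by-field dotted-version comparison, with an illustrative function name rather than the repository's exact code:

  # illustrative sketch, not the exact scripts/common.sh implementation:
  # split both versions on '.' and compare numerically field by field
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov older than 2.x"

Here version_lt 1.15 2 succeeds on the first field (1 < 2), which is why the trace above takes the older-lcov branch and exports the lcov_branch_coverage/lcov_function_coverage options.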
00:19:10.525 19:39:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:19:10.525 19:39:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.525 19:39:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.525 19:39:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.525 19:39:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.525 19:39:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.525 19:39:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.525 19:39:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.525 19:39:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.525 19:39:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.525 19:39:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.525 19:39:57 -- paths/export.sh@5 -- # export PATH 00:19:10.525 19:39:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.525 19:39:57 -- nvmf/common.sh@46 -- # : 0 00:19:10.525 19:39:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.525 19:39:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.525 19:39:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.525 19:39:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.525 19:39:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.525 19:39:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:10.525 19:39:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.525 19:39:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.525 19:39:57 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.525 19:39:57 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.525 19:39:57 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:10.525 19:39:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:10.525 19:39:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.525 19:39:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.525 19:39:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.525 19:39:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.525 19:39:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.525 19:39:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.525 19:39:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.525 19:39:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:10.525 19:39:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:10.525 19:39:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.525 19:39:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.525 19:39:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:10.525 19:39:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:10.525 19:39:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.525 19:39:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.525 19:39:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.525 19:39:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.525 19:39:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.525 19:39:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.525 19:39:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.525 19:39:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.525 19:39:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:10.525 19:39:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:10.525 Cannot find device "nvmf_tgt_br" 00:19:10.525 19:39:57 -- nvmf/common.sh@154 -- # true 00:19:10.525 19:39:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.525 Cannot find device "nvmf_tgt_br2" 00:19:10.525 19:39:57 -- nvmf/common.sh@155 -- # true 00:19:10.525 19:39:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:10.525 19:39:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:10.525 Cannot find device "nvmf_tgt_br" 00:19:10.525 19:39:57 -- nvmf/common.sh@157 -- # true 00:19:10.525 19:39:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:10.525 Cannot find device "nvmf_tgt_br2" 00:19:10.525 19:39:57 -- nvmf/common.sh@158 -- # true 00:19:10.525 19:39:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:10.525 19:39:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:10.525 19:39:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:10.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.785 19:39:57 -- nvmf/common.sh@161 -- # true 00:19:10.785 19:39:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.785 19:39:57 -- nvmf/common.sh@162 -- # true 00:19:10.785 19:39:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.785 19:39:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.785 19:39:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.785 19:39:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.785 19:39:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.785 19:39:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.785 19:39:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.785 19:39:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.785 19:39:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.785 19:39:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:10.785 19:39:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:10.785 19:39:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:10.785 19:39:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:10.785 19:39:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.785 19:39:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.785 19:39:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.785 19:39:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:10.785 19:39:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:10.785 19:39:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.785 19:39:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.785 19:39:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.785 19:39:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.785 19:39:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.785 19:39:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:10.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:19:10.785 00:19:10.785 --- 10.0.0.2 ping statistics --- 00:19:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.785 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:10.785 19:39:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:10.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:10.785 00:19:10.785 --- 10.0.0.3 ping statistics --- 00:19:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.785 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:10.785 19:39:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:10.785 00:19:10.785 --- 10.0.0.1 ping statistics --- 00:19:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.785 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:10.785 19:39:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.785 19:39:57 -- nvmf/common.sh@421 -- # return 0 00:19:10.785 19:39:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:10.785 19:39:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.785 19:39:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:10.785 19:39:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:10.785 19:39:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.785 19:39:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:10.785 19:39:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:10.785 19:39:57 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:10.785 19:39:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:10.785 19:39:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.785 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.785 19:39:57 -- nvmf/common.sh@469 -- # nvmfpid=91605 00:19:10.785 19:39:57 -- nvmf/common.sh@470 -- # waitforlisten 91605 00:19:10.785 19:39:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.785 19:39:57 -- common/autotest_common.sh@829 -- # '[' -z 91605 ']' 00:19:10.785 19:39:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.785 19:39:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.785 19:39:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.785 19:39:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.785 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:11.044 [2024-12-15 19:39:57.682845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:11.044 [2024-12-15 19:39:57.682938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.044 [2024-12-15 19:39:57.823599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.044 [2024-12-15 19:39:57.911197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:11.044 [2024-12-15 19:39:57.911756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.044 [2024-12-15 19:39:57.911787] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.044 [2024-12-15 19:39:57.911799] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
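The waitforlisten step above blocks until the nvmf_tgt process it just launched (pid 91605) has created /var/tmp/spdk.sock and is answering RPCs. A rough sketch of what such a helper amounts to — the retry count, poll interval, and use of rpc.py rpc_get_methods are assumptions for illustration, not the actual autotest_common.sh implementation:

  # illustrative only: poll until the SPDK RPC socket exists and responds, or time out
  wait_for_rpc_socket() {
      local sock=${1:-/var/tmp/spdk.sock} tries=100
      while (( tries-- > 0 )); do
          # the socket must exist and the target must answer a trivial RPC
          [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }

Once a loop like this returns, the script can safely issue the bdev_malloc_create / nvmf_create_transport RPCs that follow in the trace below.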
00:19:11.044 [2024-12-15 19:39:57.911941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.044 [2024-12-15 19:39:57.912145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.044 [2024-12-15 19:39:57.912495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.044 [2024-12-15 19:39:57.912516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.981 19:39:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.981 19:39:58 -- common/autotest_common.sh@862 -- # return 0 00:19:11.981 19:39:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:11.981 19:39:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 19:39:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 Malloc0 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 Delay0 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 [2024-12-15 19:39:58.737075] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.981 19:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:11.981 [2024-12-15 19:39:58.765317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.981 19:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.981 19:39:58 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:12.240 19:39:58 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:12.240 19:39:58 -- common/autotest_common.sh@1187 -- # local i=0 00:19:12.240 19:39:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.240 19:39:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:12.240 19:39:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:14.144 19:40:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:14.144 19:40:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:14.144 19:40:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:14.144 19:40:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:14.144 19:40:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.144 19:40:00 -- common/autotest_common.sh@1197 -- # return 0 00:19:14.144 19:40:00 -- target/initiator_timeout.sh@35 -- # fio_pid=91693 00:19:14.144 19:40:00 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:14.144 19:40:00 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:14.144 [global] 00:19:14.144 thread=1 00:19:14.144 invalidate=1 00:19:14.144 rw=write 00:19:14.144 time_based=1 00:19:14.144 runtime=60 00:19:14.144 ioengine=libaio 00:19:14.144 direct=1 00:19:14.144 bs=4096 00:19:14.144 iodepth=1 00:19:14.144 norandommap=0 00:19:14.144 numjobs=1 00:19:14.144 00:19:14.144 verify_dump=1 00:19:14.144 verify_backlog=512 00:19:14.144 verify_state_save=0 00:19:14.144 do_verify=1 00:19:14.144 verify=crc32c-intel 00:19:14.144 [job0] 00:19:14.144 filename=/dev/nvme0n1 00:19:14.144 Could not set queue depth (nvme0n1) 00:19:14.403 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.403 fio-3.35 00:19:14.403 Starting 1 thread 00:19:17.691 19:40:03 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:17.691 19:40:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.691 19:40:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.691 true 00:19:17.691 19:40:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.691 19:40:03 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:17.691 19:40:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.691 19:40:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.691 true 00:19:17.691 19:40:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.691 19:40:03 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:17.691 19:40:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.691 19:40:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.691 true 00:19:17.691 19:40:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.691 19:40:03 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:17.691 19:40:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.691 19:40:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.691 true 00:19:17.691 19:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.691 19:40:04 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:20.225 19:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.225 19:40:07 -- common/autotest_common.sh@10 -- # set +x 00:19:20.225 true 00:19:20.225 19:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:20.225 19:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.225 19:40:07 -- common/autotest_common.sh@10 -- # set +x 00:19:20.225 true 00:19:20.225 19:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:20.225 19:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.225 19:40:07 -- common/autotest_common.sh@10 -- # set +x 00:19:20.225 true 00:19:20.225 19:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:20.225 19:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.225 19:40:07 -- common/autotest_common.sh@10 -- # set +x 00:19:20.225 true 00:19:20.225 19:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:20.225 19:40:07 -- target/initiator_timeout.sh@54 -- # wait 91693 00:20:16.458 00:20:16.458 job0: (groupid=0, jobs=1): err= 0: pid=91714: Sun Dec 15 19:41:01 2024 00:20:16.458 read: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec) 00:20:16.458 slat (usec): min=11, max=10529, avg=15.04, stdev=56.72 00:20:16.458 clat (usec): min=112, max=40469k, avg=1022.03, stdev=182538.31 00:20:16.458 lat (usec): min=162, max=40469k, avg=1037.07, stdev=182538.31 00:20:16.458 clat percentiles (usec): 00:20:16.458 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:20:16.458 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:20:16.458 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 233], 00:20:16.458 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 297], 99.95th=[ 326], 00:20:16.458 | 99.99th=[ 578] 00:20:16.458 write: IOPS=823, BW=3294KiB/s (3373kB/s)(193MiB/60000msec); 0 zone resets 00:20:16.458 slat (usec): min=17, max=1066, avg=22.03, stdev= 9.56 00:20:16.458 clat (usec): min=115, max=3128, avg=157.64, stdev=25.45 00:20:16.458 lat (usec): min=137, max=3156, avg=179.67, stdev=27.56 00:20:16.458 clat percentiles (usec): 00:20:16.458 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:20:16.458 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:20:16.458 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 188], 00:20:16.458 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 253], 99.95th=[ 277], 00:20:16.458 | 99.99th=[ 644] 00:20:16.458 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9899.28, stdev=1846.56, samples=39 00:20:16.458 iops : min= 1024, max= 3072, avg=2474.82, stdev=461.64, samples=39 00:20:16.458 lat (usec) : 250=99.33%, 500=0.65%, 750=0.01%, 1000=0.01% 00:20:16.458 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:16.458 cpu : usr=0.54%, sys=2.23%, ctx=98636, majf=0, minf=5 00:20:16.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:16.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.458 issued rwts: total=49152,49415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.458 00:20:16.458 Run status group 0 (all jobs): 00:20:16.458 READ: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:20:16.458 WRITE: bw=3294KiB/s (3373kB/s), 3294KiB/s-3294KiB/s (3373kB/s-3373kB/s), io=193MiB (202MB), run=60000-60000msec 00:20:16.458 00:20:16.458 Disk stats (read/write): 00:20:16.458 nvme0n1: ios=49169/49152, merge=0/0, ticks=10172/8297, in_queue=18469, util=99.81% 00:20:16.458 19:41:01 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:16.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.458 19:41:01 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:16.458 19:41:01 -- common/autotest_common.sh@1208 -- # local i=0 00:20:16.458 19:41:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:16.458 19:41:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:16.458 19:41:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:16.458 19:41:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:16.458 19:41:01 -- common/autotest_common.sh@1220 -- # return 0 00:20:16.458 19:41:01 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:16.458 nvmf hotplug test: fio successful as expected 00:20:16.458 19:41:01 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:16.458 19:41:01 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.459 19:41:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.459 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:20:16.459 19:41:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.459 19:41:01 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:16.459 19:41:01 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:16.459 19:41:01 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:16.459 19:41:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:16.459 19:41:01 -- nvmf/common.sh@116 -- # sync 00:20:16.459 19:41:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:16.459 19:41:01 -- nvmf/common.sh@119 -- # set +e 00:20:16.459 19:41:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:16.459 19:41:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:16.459 rmmod nvme_tcp 00:20:16.459 rmmod nvme_fabrics 00:20:16.459 rmmod nvme_keyring 00:20:16.459 19:41:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:16.459 19:41:01 -- nvmf/common.sh@123 -- # set -e 00:20:16.459 19:41:01 -- nvmf/common.sh@124 -- # return 0 00:20:16.459 19:41:01 -- nvmf/common.sh@477 -- # '[' -n 91605 ']' 00:20:16.459 19:41:01 -- nvmf/common.sh@478 -- # killprocess 91605 00:20:16.459 19:41:01 -- common/autotest_common.sh@936 -- # '[' -z 91605 ']' 00:20:16.459 19:41:01 -- common/autotest_common.sh@940 -- # kill -0 91605 00:20:16.459 19:41:01 -- common/autotest_common.sh@941 -- # uname 00:20:16.459 19:41:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.459 19:41:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91605 00:20:16.459 19:41:01 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:16.459 19:41:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:16.459 killing process with pid 91605 00:20:16.459 19:41:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91605' 00:20:16.459 19:41:01 -- common/autotest_common.sh@955 -- # kill 91605 00:20:16.459 19:41:01 -- common/autotest_common.sh@960 -- # wait 91605 00:20:16.459 19:41:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:16.459 19:41:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:16.459 19:41:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:16.459 19:41:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.459 19:41:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:16.459 19:41:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.459 19:41:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.459 19:41:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.459 19:41:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:16.459 00:20:16.459 real 1m4.733s 00:20:16.459 user 4m7.003s 00:20:16.459 sys 0m8.269s 00:20:16.459 19:41:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:16.459 ************************************ 00:20:16.459 END TEST nvmf_initiator_timeout 00:20:16.459 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:20:16.459 ************************************ 00:20:16.459 19:41:01 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:16.459 19:41:01 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:16.459 19:41:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.459 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:20:16.459 19:41:01 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:16.459 19:41:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.459 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:20:16.459 19:41:01 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:16.459 19:41:01 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:16.459 19:41:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.459 19:41:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.459 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:20:16.459 ************************************ 00:20:16.459 START TEST nvmf_multicontroller 00:20:16.459 ************************************ 00:20:16.459 19:41:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:16.459 * Looking for test storage... 
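The bdev_delay_update_latency calls at the top of this excerpt are the tail end of the initiator-timeout flow: the earlier half of the test (not shown here) pushes the Delay0 bdev's injected latencies high enough to trip the initiator's timeout, and the calls above drop them back to 30 so the fio job can drain and finish. A minimal sketch of that pattern using the stock rpc.py client; the large value and the loop are illustrative, not copied from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # stall I/O on the delay bdev (illustrative high value)
    for t in avg_read avg_write p99_read p99_write; do
        "$rpc" bdev_delay_update_latency Delay0 "$t" 15000000
    done
    sleep 3
    # restore small latencies so outstanding I/O can complete
    for t in avg_read avg_write p99_read p99_write; do
        "$rpc" bdev_delay_update_latency Delay0 "$t" 30
    done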
00:20:16.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:16.459 19:41:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:16.459 19:41:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:16.459 19:41:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:16.459 19:41:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:16.459 19:41:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:16.459 19:41:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:16.459 19:41:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:16.459 19:41:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:16.459 19:41:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:16.459 19:41:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.459 19:41:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:16.459 19:41:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:16.459 19:41:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:16.459 19:41:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:16.459 19:41:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:16.459 19:41:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:16.459 19:41:02 -- scripts/common.sh@344 -- # : 1 00:20:16.459 19:41:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:16.459 19:41:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.459 19:41:02 -- scripts/common.sh@364 -- # decimal 1 00:20:16.459 19:41:02 -- scripts/common.sh@352 -- # local d=1 00:20:16.459 19:41:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.459 19:41:02 -- scripts/common.sh@354 -- # echo 1 00:20:16.459 19:41:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:16.459 19:41:02 -- scripts/common.sh@365 -- # decimal 2 00:20:16.459 19:41:02 -- scripts/common.sh@352 -- # local d=2 00:20:16.459 19:41:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.459 19:41:02 -- scripts/common.sh@354 -- # echo 2 00:20:16.459 19:41:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:16.459 19:41:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:16.459 19:41:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:16.459 19:41:02 -- scripts/common.sh@367 -- # return 0 00:20:16.459 19:41:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.459 19:41:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:16.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.459 --rc genhtml_branch_coverage=1 00:20:16.459 --rc genhtml_function_coverage=1 00:20:16.459 --rc genhtml_legend=1 00:20:16.459 --rc geninfo_all_blocks=1 00:20:16.459 --rc geninfo_unexecuted_blocks=1 00:20:16.459 00:20:16.459 ' 00:20:16.459 19:41:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:16.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.459 --rc genhtml_branch_coverage=1 00:20:16.459 --rc genhtml_function_coverage=1 00:20:16.459 --rc genhtml_legend=1 00:20:16.459 --rc geninfo_all_blocks=1 00:20:16.459 --rc geninfo_unexecuted_blocks=1 00:20:16.459 00:20:16.459 ' 00:20:16.459 19:41:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:16.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.459 --rc genhtml_branch_coverage=1 00:20:16.459 --rc genhtml_function_coverage=1 00:20:16.459 --rc genhtml_legend=1 00:20:16.459 --rc geninfo_all_blocks=1 00:20:16.459 --rc geninfo_unexecuted_blocks=1 00:20:16.459 00:20:16.459 ' 00:20:16.459 
19:41:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:16.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.459 --rc genhtml_branch_coverage=1 00:20:16.459 --rc genhtml_function_coverage=1 00:20:16.459 --rc genhtml_legend=1 00:20:16.459 --rc geninfo_all_blocks=1 00:20:16.459 --rc geninfo_unexecuted_blocks=1 00:20:16.459 00:20:16.459 ' 00:20:16.459 19:41:02 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.459 19:41:02 -- nvmf/common.sh@7 -- # uname -s 00:20:16.459 19:41:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.459 19:41:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.459 19:41:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.459 19:41:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.459 19:41:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.459 19:41:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.459 19:41:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.459 19:41:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.459 19:41:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.459 19:41:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.459 19:41:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:16.459 19:41:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:16.459 19:41:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.459 19:41:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.459 19:41:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.459 19:41:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.459 19:41:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.459 19:41:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.459 19:41:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.460 19:41:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.460 19:41:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.460 19:41:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.460 19:41:02 -- paths/export.sh@5 -- # export PATH 00:20:16.460 19:41:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.460 19:41:02 -- nvmf/common.sh@46 -- # : 0 00:20:16.460 19:41:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.460 19:41:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.460 19:41:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.460 19:41:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.460 19:41:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.460 19:41:02 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.460 19:41:02 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.460 19:41:02 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:16.460 19:41:02 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:16.460 19:41:02 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.460 19:41:02 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:16.460 19:41:02 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:16.460 19:41:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.460 19:41:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:16.460 19:41:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:16.460 19:41:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:16.460 19:41:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.460 19:41:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.460 19:41:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.460 19:41:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:16.460 19:41:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.460 19:41:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
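The scripts/common.sh trace just above (IFS=.-:, read -ra ver1/ver2, decimal, the (( v < ... )) loop) is only deciding whether the installed lcov predates 2.x so the matching LCOV_OPTS can be exported. A rough standalone equivalent of that comparison, reconstructed from the traced steps rather than copied from scripts/common.sh:

    # returns 0 (true) when $1 sorts before $2, comparing dot-separated fields numerically
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"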
00:20:16.460 19:41:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.460 19:41:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:16.460 19:41:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.460 19:41:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.460 19:41:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.460 19:41:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.460 19:41:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.460 19:41:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.460 19:41:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.460 19:41:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.460 19:41:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:16.460 19:41:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:16.460 Cannot find device "nvmf_tgt_br" 00:20:16.460 19:41:02 -- nvmf/common.sh@154 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.460 Cannot find device "nvmf_tgt_br2" 00:20:16.460 19:41:02 -- nvmf/common.sh@155 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:16.460 19:41:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:16.460 Cannot find device "nvmf_tgt_br" 00:20:16.460 19:41:02 -- nvmf/common.sh@157 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:16.460 Cannot find device "nvmf_tgt_br2" 00:20:16.460 19:41:02 -- nvmf/common.sh@158 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:16.460 19:41:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:16.460 19:41:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.460 19:41:02 -- nvmf/common.sh@161 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.460 19:41:02 -- nvmf/common.sh@162 -- # true 00:20:16.460 19:41:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.460 19:41:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.460 19:41:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.460 19:41:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.460 19:41:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.460 19:41:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:16.460 19:41:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.460 19:41:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:16.460 19:41:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:16.460 19:41:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:16.460 19:41:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:16.460 19:41:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
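The nvmf_veth_init helper being traced here builds a small virtual topology: a network namespace for the target and veth pairs whose host-side ends get enslaved to a bridge (the bridge, iptables rules and ping checks follow just below). Condensed into plain ip commands, the shape of it is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br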
00:20:16.460 19:41:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:16.460 19:41:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.460 19:41:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.460 19:41:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.460 19:41:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:16.460 19:41:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:16.460 19:41:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.460 19:41:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.460 19:41:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.460 19:41:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.460 19:41:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.460 19:41:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:16.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:20:16.460 00:20:16.460 --- 10.0.0.2 ping statistics --- 00:20:16.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.460 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:16.460 19:41:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:16.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:16.460 00:20:16.460 --- 10.0.0.3 ping statistics --- 00:20:16.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.460 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:16.460 19:41:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:16.460 00:20:16.460 --- 10.0.0.1 ping statistics --- 00:20:16.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.460 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:16.460 19:41:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.460 19:41:02 -- nvmf/common.sh@421 -- # return 0 00:20:16.460 19:41:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.460 19:41:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:16.460 19:41:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.460 19:41:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:16.460 19:41:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:16.460 19:41:02 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:16.460 19:41:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:16.460 19:41:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.460 19:41:02 -- common/autotest_common.sh@10 -- # set +x 00:20:16.461 19:41:02 -- nvmf/common.sh@469 -- # nvmfpid=92546 00:20:16.461 19:41:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:16.461 19:41:02 -- nvmf/common.sh@470 -- # waitforlisten 92546 00:20:16.461 19:41:02 -- common/autotest_common.sh@829 -- # '[' -z 92546 ']' 00:20:16.461 19:41:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.461 19:41:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.461 19:41:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.461 19:41:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.461 19:41:02 -- common/autotest_common.sh@10 -- # set +x 00:20:16.461 [2024-12-15 19:41:02.558256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:16.461 [2024-12-15 19:41:02.558403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.461 [2024-12-15 19:41:02.695111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:16.461 [2024-12-15 19:41:02.782788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.461 [2024-12-15 19:41:02.782950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.461 [2024-12-15 19:41:02.782963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.461 [2024-12-15 19:41:02.782971] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
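nvmfappstart above launches nvmf_tgt inside the target namespace, and waitforlisten then blocks until the RPC socket answers. A sketch of that pattern; the retry loop is illustrative and not the helper's actual body:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # poll the RPC server until it is ready to accept calls
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
        sleep 0.5
    done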
00:20:16.461 [2024-12-15 19:41:02.784124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.461 [2024-12-15 19:41:02.784506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.461 [2024-12-15 19:41:02.784539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.720 19:41:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.720 19:41:03 -- common/autotest_common.sh@862 -- # return 0 00:20:16.720 19:41:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:16.720 19:41:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.720 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.720 19:41:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.720 19:41:03 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.720 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.720 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.720 [2024-12-15 19:41:03.584904] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.720 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.720 19:41:03 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:16.720 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.720 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 Malloc0 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 [2024-12-15 19:41:03.659129] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 [2024-12-15 19:41:03.666962] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 Malloc1 00:20:16.980 19:41:03 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:16.980 19:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 19:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.980 19:41:03 -- host/multicontroller.sh@44 -- # bdevperf_pid=92598 00:20:16.980 19:41:03 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:16.980 19:41:03 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.980 19:41:03 -- host/multicontroller.sh@47 -- # waitforlisten 92598 /var/tmp/bdevperf.sock 00:20:16.980 19:41:03 -- common/autotest_common.sh@829 -- # '[' -z 92598 ']' 00:20:16.980 19:41:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.980 19:41:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.980 19:41:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
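Stripped of the rpc_cmd/xtrace wrappers, the target-side setup traced above amounts to a handful of rpc.py calls against the nvmf_tgt socket (same arguments as in this run; the second subsystem, cnode2, repeats the pattern with Malloc1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced above
    "$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev with 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421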
00:20:16.980 19:41:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.980 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:20:18.371 19:41:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.371 19:41:04 -- common/autotest_common.sh@862 -- # return 0 00:20:18.371 19:41:04 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:18.371 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.371 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.371 NVMe0n1 00:20:18.371 19:41:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.371 19:41:04 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:18.371 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.371 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.371 19:41:04 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:18.371 19:41:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.371 1 00:20:18.371 19:41:04 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.371 19:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:20:18.371 19:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.371 19:41:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:18.371 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.371 19:41:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:18.371 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.371 19:41:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:18.371 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.371 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.371 2024/12/15 19:41:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:18.371 request: 00:20:18.372 { 00:20:18.372 "method": "bdev_nvme_attach_controller", 00:20:18.372 "params": { 00:20:18.372 "name": "NVMe0", 00:20:18.372 "trtype": "tcp", 00:20:18.372 "traddr": "10.0.0.2", 00:20:18.372 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:18.372 "hostaddr": "10.0.0.2", 00:20:18.372 "hostsvcid": "60000", 00:20:18.372 "adrfam": "ipv4", 00:20:18.372 "trsvcid": "4420", 00:20:18.372 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:18.372 } 00:20:18.372 } 00:20:18.372 Got JSON-RPC error response 00:20:18.372 GoRPCClient: error on JSON-RPC call 00:20:18.372 19:41:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:18.372 19:41:04 -- 
common/autotest_common.sh@653 -- # es=1 00:20:18.372 19:41:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.372 19:41:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.372 19:41:04 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.372 19:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:20:18.372 19:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.372 19:41:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:18.372 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 2024/12/15 19:41:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:18.372 request: 00:20:18.372 { 00:20:18.372 "method": "bdev_nvme_attach_controller", 00:20:18.372 "params": { 00:20:18.372 "name": "NVMe0", 00:20:18.372 "trtype": "tcp", 00:20:18.372 "traddr": "10.0.0.2", 00:20:18.372 "hostaddr": "10.0.0.2", 00:20:18.372 "hostsvcid": "60000", 00:20:18.372 "adrfam": "ipv4", 00:20:18.372 "trsvcid": "4420", 00:20:18.372 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:18.372 } 00:20:18.372 } 00:20:18.372 Got JSON-RPC error response 00:20:18.372 GoRPCClient: error on JSON-RPC call 00:20:18.372 19:41:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # es=1 00:20:18.372 19:41:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.372 19:41:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.372 19:41:04 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:20:18.372 19:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:18.372 19:41:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 2024/12/15 19:41:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:18.372 request: 00:20:18.372 { 00:20:18.372 "method": "bdev_nvme_attach_controller", 00:20:18.372 "params": { 00:20:18.372 "name": "NVMe0", 00:20:18.372 "trtype": "tcp", 00:20:18.372 "traddr": "10.0.0.2", 00:20:18.372 "hostaddr": "10.0.0.2", 00:20:18.372 "hostsvcid": "60000", 00:20:18.372 "adrfam": "ipv4", 00:20:18.372 "trsvcid": "4420", 00:20:18.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.372 "multipath": "disable" 00:20:18.372 } 00:20:18.372 } 00:20:18.372 Got JSON-RPC error response 00:20:18.372 GoRPCClient: error on JSON-RPC call 00:20:18.372 19:41:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # es=1 00:20:18.372 19:41:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.372 19:41:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.372 19:41:04 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.372 19:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:20:18.372 19:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.372 19:41:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:18.372 19:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:18.372 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 2024/12/15 19:41:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:18.372 request: 00:20:18.372 { 00:20:18.372 "method": "bdev_nvme_attach_controller", 00:20:18.372 "params": { 00:20:18.372 "name": "NVMe0", 
00:20:18.372 "trtype": "tcp", 00:20:18.372 "traddr": "10.0.0.2", 00:20:18.372 "hostaddr": "10.0.0.2", 00:20:18.372 "hostsvcid": "60000", 00:20:18.372 "adrfam": "ipv4", 00:20:18.372 "trsvcid": "4420", 00:20:18.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.372 "multipath": "failover" 00:20:18.372 } 00:20:18.372 } 00:20:18.372 Got JSON-RPC error response 00:20:18.372 GoRPCClient: error on JSON-RPC call 00:20:18.372 19:41:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@653 -- # es=1 00:20:18.372 19:41:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.372 19:41:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.372 19:41:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.372 19:41:04 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:18.372 19:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 00:20:18.372 19:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.372 19:41:05 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:18.372 19:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 19:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.372 19:41:05 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:18.372 19:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.372 00:20:18.372 19:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.372 19:41:05 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:18.372 19:41:05 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:18.372 19:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.372 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.373 19:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.373 19:41:05 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:18.373 19:41:05 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.775 0 00:20:19.775 19:41:06 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:19.775 19:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.775 19:41:06 -- common/autotest_common.sh@10 -- # set +x 00:20:19.775 19:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.775 19:41:06 -- host/multicontroller.sh@100 -- # killprocess 92598 00:20:19.775 19:41:06 -- common/autotest_common.sh@936 -- # '[' -z 92598 ']' 00:20:19.775 19:41:06 -- common/autotest_common.sh@940 -- # kill -0 92598 00:20:19.775 19:41:06 -- common/autotest_common.sh@941 -- # uname 00:20:19.775 19:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:19.775 19:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92598 00:20:19.775 19:41:06 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:19.775 19:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:19.775 19:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92598' 00:20:19.775 killing process with pid 92598 00:20:19.775 19:41:06 -- common/autotest_common.sh@955 -- # kill 92598 00:20:19.775 19:41:06 -- common/autotest_common.sh@960 -- # wait 92598 00:20:19.775 19:41:06 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.775 19:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.775 19:41:06 -- common/autotest_common.sh@10 -- # set +x 00:20:19.775 19:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.775 19:41:06 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:19.775 19:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.775 19:41:06 -- common/autotest_common.sh@10 -- # set +x 00:20:19.775 19:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.775 19:41:06 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:19.775 19:41:06 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:19.775 19:41:06 -- common/autotest_common.sh@1607 -- # read -r file 00:20:19.775 19:41:06 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:19.775 19:41:06 -- common/autotest_common.sh@1606 -- # sort -u 00:20:19.775 19:41:06 -- common/autotest_common.sh@1608 -- # cat 00:20:19.775 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:19.775 [2024-12-15 19:41:03.797109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:19.775 [2024-12-15 19:41:03.797240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92598 ] 00:20:19.775 [2024-12-15 19:41:03.939297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.775 [2024-12-15 19:41:04.027619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.775 [2024-12-15 19:41:05.121936] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 6a84c8db-2d30-4b65-adde-d4aa1ecb6125 already exists 00:20:19.775 [2024-12-15 19:41:05.121987] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:6a84c8db-2d30-4b65-adde-d4aa1ecb6125 alias for bdev NVMe1n1 00:20:19.775 [2024-12-15 19:41:05.122019] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:19.776 Running I/O for 1 seconds... 
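The run of NOT rpc_cmd checks above probes bdev_nvme_attach_controller's name-reuse rules from the bdevperf side: the first attach creates controller NVMe0 (and bdev NVMe0n1); re-attaching the same name with a different hostnqn, a different subsystem, or multipath disabled is rejected with the "already exists" errors shown, while a plain attach to the second portal of the same subsystem is accepted as an additional path. Reduced to bare rpc.py calls against the bdevperf socket used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # first path: creates controller NVMe0 and bdev NVMe0n1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # second portal of the same subsystem: accepted as an additional path
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # inspect the attached controller(s)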
00:20:19.776 00:20:19.776 Latency(us) 00:20:19.776 [2024-12-15T19:41:06.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.776 [2024-12-15T19:41:06.672Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:19.776 NVMe0n1 : 1.01 22358.20 87.34 0.00 0.00 5710.18 2040.55 10187.87 00:20:19.776 [2024-12-15T19:41:06.672Z] =================================================================================================================== 00:20:19.776 [2024-12-15T19:41:06.672Z] Total : 22358.20 87.34 0.00 0.00 5710.18 2040.55 10187.87 00:20:19.776 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.776 00:20:19.776 Latency(us) 00:20:19.776 [2024-12-15T19:41:06.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.776 [2024-12-15T19:41:06.672Z] =================================================================================================================== 00:20:19.776 [2024-12-15T19:41:06.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.776 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:19.776 19:41:06 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:19.776 19:41:06 -- common/autotest_common.sh@1607 -- # read -r file 00:20:19.776 19:41:06 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:19.776 19:41:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:19.776 19:41:06 -- nvmf/common.sh@116 -- # sync 00:20:20.035 19:41:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:20.035 19:41:06 -- nvmf/common.sh@119 -- # set +e 00:20:20.035 19:41:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:20.035 19:41:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:20.035 rmmod nvme_tcp 00:20:20.035 rmmod nvme_fabrics 00:20:20.035 rmmod nvme_keyring 00:20:20.035 19:41:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.035 19:41:06 -- nvmf/common.sh@123 -- # set -e 00:20:20.035 19:41:06 -- nvmf/common.sh@124 -- # return 0 00:20:20.035 19:41:06 -- nvmf/common.sh@477 -- # '[' -n 92546 ']' 00:20:20.035 19:41:06 -- nvmf/common.sh@478 -- # killprocess 92546 00:20:20.035 19:41:06 -- common/autotest_common.sh@936 -- # '[' -z 92546 ']' 00:20:20.035 19:41:06 -- common/autotest_common.sh@940 -- # kill -0 92546 00:20:20.035 19:41:06 -- common/autotest_common.sh@941 -- # uname 00:20:20.035 19:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.035 19:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92546 00:20:20.035 19:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:20.035 19:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:20.035 killing process with pid 92546 00:20:20.035 19:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92546' 00:20:20.035 19:41:06 -- common/autotest_common.sh@955 -- # kill 92546 00:20:20.035 19:41:06 -- common/autotest_common.sh@960 -- # wait 92546 00:20:20.295 19:41:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:20.295 19:41:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:20.295 19:41:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:20.295 19:41:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.295 19:41:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:20.295 19:41:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.295 19:41:07 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:20.295 19:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.295 19:41:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:20.295 00:20:20.295 real 0m5.265s 00:20:20.295 user 0m16.234s 00:20:20.295 sys 0m1.251s 00:20:20.295 19:41:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.295 19:41:07 -- common/autotest_common.sh@10 -- # set +x 00:20:20.295 ************************************ 00:20:20.295 END TEST nvmf_multicontroller 00:20:20.295 ************************************ 00:20:20.555 19:41:07 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:20.555 19:41:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.555 19:41:07 -- common/autotest_common.sh@10 -- # set +x 00:20:20.555 ************************************ 00:20:20.555 START TEST nvmf_aer 00:20:20.555 ************************************ 00:20:20.555 19:41:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:20.555 * Looking for test storage... 00:20:20.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.555 19:41:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:20.555 19:41:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:20.555 19:41:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:20.555 19:41:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:20.555 19:41:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:20.555 19:41:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:20.555 19:41:07 -- scripts/common.sh@335 -- # IFS=.-: 00:20:20.555 19:41:07 -- scripts/common.sh@335 -- # read -ra ver1 00:20:20.555 19:41:07 -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.555 19:41:07 -- scripts/common.sh@336 -- # read -ra ver2 00:20:20.555 19:41:07 -- scripts/common.sh@337 -- # local 'op=<' 00:20:20.555 19:41:07 -- scripts/common.sh@339 -- # ver1_l=2 00:20:20.555 19:41:07 -- scripts/common.sh@340 -- # ver2_l=1 00:20:20.555 19:41:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:20.555 19:41:07 -- scripts/common.sh@343 -- # case "$op" in 00:20:20.555 19:41:07 -- scripts/common.sh@344 -- # : 1 00:20:20.555 19:41:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:20.555 19:41:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.555 19:41:07 -- scripts/common.sh@364 -- # decimal 1 00:20:20.555 19:41:07 -- scripts/common.sh@352 -- # local d=1 00:20:20.555 19:41:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.555 19:41:07 -- scripts/common.sh@354 -- # echo 1 00:20:20.555 19:41:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:20.555 19:41:07 -- scripts/common.sh@365 -- # decimal 2 00:20:20.555 19:41:07 -- scripts/common.sh@352 -- # local d=2 00:20:20.555 19:41:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.555 19:41:07 -- scripts/common.sh@354 -- # echo 2 00:20:20.555 19:41:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:20.555 19:41:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:20.555 19:41:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:20.555 19:41:07 -- scripts/common.sh@367 -- # return 0 00:20:20.555 19:41:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.555 --rc genhtml_branch_coverage=1 00:20:20.555 --rc genhtml_function_coverage=1 00:20:20.555 --rc genhtml_legend=1 00:20:20.555 --rc geninfo_all_blocks=1 00:20:20.555 --rc geninfo_unexecuted_blocks=1 00:20:20.555 00:20:20.555 ' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.555 --rc genhtml_branch_coverage=1 00:20:20.555 --rc genhtml_function_coverage=1 00:20:20.555 --rc genhtml_legend=1 00:20:20.555 --rc geninfo_all_blocks=1 00:20:20.555 --rc geninfo_unexecuted_blocks=1 00:20:20.555 00:20:20.555 ' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.555 --rc genhtml_branch_coverage=1 00:20:20.555 --rc genhtml_function_coverage=1 00:20:20.555 --rc genhtml_legend=1 00:20:20.555 --rc geninfo_all_blocks=1 00:20:20.555 --rc geninfo_unexecuted_blocks=1 00:20:20.555 00:20:20.555 ' 00:20:20.555 19:41:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.555 --rc genhtml_branch_coverage=1 00:20:20.555 --rc genhtml_function_coverage=1 00:20:20.555 --rc genhtml_legend=1 00:20:20.555 --rc geninfo_all_blocks=1 00:20:20.555 --rc geninfo_unexecuted_blocks=1 00:20:20.555 00:20:20.555 ' 00:20:20.555 19:41:07 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.555 19:41:07 -- nvmf/common.sh@7 -- # uname -s 00:20:20.555 19:41:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.555 19:41:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.555 19:41:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.555 19:41:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.555 19:41:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.555 19:41:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.555 19:41:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.555 19:41:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.555 19:41:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.555 19:41:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.555 19:41:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:20.555 
19:41:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:20.555 19:41:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.556 19:41:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.556 19:41:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.556 19:41:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.556 19:41:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.556 19:41:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.556 19:41:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.556 19:41:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.556 19:41:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.556 19:41:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.556 19:41:07 -- paths/export.sh@5 -- # export PATH 00:20:20.556 19:41:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.556 19:41:07 -- nvmf/common.sh@46 -- # : 0 00:20:20.556 19:41:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:20.556 19:41:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:20.556 19:41:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:20.556 19:41:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.556 19:41:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.556 19:41:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
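The NVME_HOSTNQN/NVME_HOSTID pair generated here (and the SPDKISFASTANDAWESOME serial) is what the host-side helpers pass to the kernel initiator when they connect it to the target, as the initiator_timeout test did earlier in this log. In plain nvme-cli terms that flow is roughly:

    # connect the kernel host to the subsystem exported on 10.0.0.2:4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # wait for the namespace to show up, matching on the subsystem serial
    lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME
    # ...run I/O against the new /dev/nvmeXnY...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1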
00:20:20.556 19:41:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:20.556 19:41:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:20.556 19:41:07 -- host/aer.sh@11 -- # nvmftestinit 00:20:20.815 19:41:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:20.815 19:41:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.815 19:41:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:20.815 19:41:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:20.815 19:41:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:20.815 19:41:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.815 19:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.815 19:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.815 19:41:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:20.815 19:41:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:20.815 19:41:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:20.815 19:41:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:20.815 19:41:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:20.815 19:41:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:20.815 19:41:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.815 19:41:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.815 19:41:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:20.815 19:41:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:20.815 19:41:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.815 19:41:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.815 19:41:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.815 19:41:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.815 19:41:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.815 19:41:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.815 19:41:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.815 19:41:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.815 19:41:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:20.815 19:41:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:20.815 Cannot find device "nvmf_tgt_br" 00:20:20.815 19:41:07 -- nvmf/common.sh@154 -- # true 00:20:20.815 19:41:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.815 Cannot find device "nvmf_tgt_br2" 00:20:20.815 19:41:07 -- nvmf/common.sh@155 -- # true 00:20:20.815 19:41:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:20.816 19:41:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:20.816 Cannot find device "nvmf_tgt_br" 00:20:20.816 19:41:07 -- nvmf/common.sh@157 -- # true 00:20:20.816 19:41:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:20.816 Cannot find device "nvmf_tgt_br2" 00:20:20.816 19:41:07 -- nvmf/common.sh@158 -- # true 00:20:20.816 19:41:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:20.816 19:41:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:20.816 19:41:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.816 19:41:07 -- nvmf/common.sh@161 -- # true 00:20:20.816 19:41:07 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.816 19:41:07 -- nvmf/common.sh@162 -- # true 00:20:20.816 19:41:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.816 19:41:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.816 19:41:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.816 19:41:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.816 19:41:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.816 19:41:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.816 19:41:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.816 19:41:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:20.816 19:41:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:20.816 19:41:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:20.816 19:41:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:20.816 19:41:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:20.816 19:41:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:20.816 19:41:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.816 19:41:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.816 19:41:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.816 19:41:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:20.816 19:41:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:20.816 19:41:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.075 19:41:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.075 19:41:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.075 19:41:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.075 19:41:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.075 19:41:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:20:21.075 00:20:21.075 --- 10.0.0.2 ping statistics --- 00:20:21.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.075 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:21.075 19:41:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:21.075 00:20:21.075 --- 10.0.0.3 ping statistics --- 00:20:21.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.075 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:21.075 19:41:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:21.075 00:20:21.075 --- 10.0.0.1 ping statistics --- 00:20:21.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.075 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:21.075 19:41:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.075 19:41:07 -- nvmf/common.sh@421 -- # return 0 00:20:21.075 19:41:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.075 19:41:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.075 19:41:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.075 19:41:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.075 19:41:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.075 19:41:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.075 19:41:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.075 19:41:07 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:21.075 19:41:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:21.075 19:41:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.075 19:41:07 -- common/autotest_common.sh@10 -- # set +x 00:20:21.075 19:41:07 -- nvmf/common.sh@469 -- # nvmfpid=92864 00:20:21.075 19:41:07 -- nvmf/common.sh@470 -- # waitforlisten 92864 00:20:21.075 19:41:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.075 19:41:07 -- common/autotest_common.sh@829 -- # '[' -z 92864 ']' 00:20:21.075 19:41:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.075 19:41:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.075 19:41:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.075 19:41:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.075 19:41:07 -- common/autotest_common.sh@10 -- # set +x 00:20:21.075 [2024-12-15 19:41:07.838844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:21.075 [2024-12-15 19:41:07.838912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.076 [2024-12-15 19:41:07.965204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.335 [2024-12-15 19:41:08.056060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:21.335 [2024-12-15 19:41:08.056538] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.335 [2024-12-15 19:41:08.056590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.335 [2024-12-15 19:41:08.056747] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
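(Editorial aside.) For readers skimming the xtrace: the block above is nvmf_veth_init wiring the initiator-side veth and the namespaced target interfaces onto one bridge so the host at 10.0.0.1 can reach the target at 10.0.0.2/10.0.0.3. A condensed recap, abridged from the commands traced above (not the verbatim common.sh):

  # host: nvmf_init_if 10.0.0.1 --+              +-- nvmf_tgt_if  10.0.0.2 \
  #                               +-- nvmf_br ---+                          } inside nvmf_tgt_ns_spdk
  #                                              +-- nvmf_tgt_if2 10.0.0.3 /
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # the reachability checks traced above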
00:20:21.335 [2024-12-15 19:41:08.056923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.335 [2024-12-15 19:41:08.057461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.335 [2024-12-15 19:41:08.057495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.335 [2024-12-15 19:41:08.057028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.903 19:41:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.903 19:41:08 -- common/autotest_common.sh@862 -- # return 0 00:20:21.903 19:41:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:21.903 19:41:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.903 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.162 19:41:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.162 19:41:08 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.162 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.162 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.162 [2024-12-15 19:41:08.844055] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.162 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:22.163 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.163 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.163 Malloc0 00:20:22.163 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:22.163 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.163 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.163 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.163 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.163 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.163 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.163 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.163 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.163 [2024-12-15 19:41:08.926482] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.163 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:22.163 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.163 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:20:22.163 [2024-12-15 19:41:08.934202] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:22.163 [ 00:20:22.163 { 00:20:22.163 "allow_any_host": true, 00:20:22.163 "hosts": [], 00:20:22.163 "listen_addresses": [], 00:20:22.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.163 "subtype": "Discovery" 00:20:22.163 }, 00:20:22.163 { 00:20:22.163 "allow_any_host": true, 00:20:22.163 "hosts": 
[], 00:20:22.163 "listen_addresses": [ 00:20:22.163 { 00:20:22.163 "adrfam": "IPv4", 00:20:22.163 "traddr": "10.0.0.2", 00:20:22.163 "transport": "TCP", 00:20:22.163 "trsvcid": "4420", 00:20:22.163 "trtype": "TCP" 00:20:22.163 } 00:20:22.163 ], 00:20:22.163 "max_cntlid": 65519, 00:20:22.163 "max_namespaces": 2, 00:20:22.163 "min_cntlid": 1, 00:20:22.163 "model_number": "SPDK bdev Controller", 00:20:22.163 "namespaces": [ 00:20:22.163 { 00:20:22.163 "bdev_name": "Malloc0", 00:20:22.163 "name": "Malloc0", 00:20:22.163 "nguid": "665DA527BF914510A4F36C7018283721", 00:20:22.163 "nsid": 1, 00:20:22.163 "uuid": "665da527-bf91-4510-a4f3-6c7018283721" 00:20:22.163 } 00:20:22.163 ], 00:20:22.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.163 "serial_number": "SPDK00000000000001", 00:20:22.163 "subtype": "NVMe" 00:20:22.163 } 00:20:22.163 ] 00:20:22.163 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.163 19:41:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:22.163 19:41:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:22.163 19:41:08 -- host/aer.sh@33 -- # aerpid=92918 00:20:22.163 19:41:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:22.163 19:41:08 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:22.163 19:41:08 -- common/autotest_common.sh@1254 -- # local i=0 00:20:22.163 19:41:08 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.163 19:41:08 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:22.163 19:41:08 -- common/autotest_common.sh@1257 -- # i=1 00:20:22.163 19:41:08 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:22.422 19:41:09 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.422 19:41:09 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:22.422 19:41:09 -- common/autotest_common.sh@1257 -- # i=2 00:20:22.422 19:41:09 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:22.422 19:41:09 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.422 19:41:09 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.422 19:41:09 -- common/autotest_common.sh@1265 -- # return 0 00:20:22.422 19:41:09 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:22.422 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.422 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.422 Malloc1 00:20:22.422 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.422 19:41:09 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:22.422 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.422 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.422 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.422 19:41:09 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:22.422 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.422 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.422 Asynchronous Event Request test 00:20:22.422 Attaching to 10.0.0.2 00:20:22.422 Attached to 10.0.0.2 00:20:22.422 Registering asynchronous event callbacks... 00:20:22.422 Starting namespace attribute notice tests for all controllers... 
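(Editorial aside.) The aer test logic, in short: nqn.2016-06.io.spdk:cnode1 was created with -m 2, leaving room for a second namespace; the aer app connects and arms an Asynchronous Event Request; the script then hot-adds a namespace so the target raises a Namespace Attribute Changed notice (event type 0x02 = Notice, log page 0x04 = Changed Namespace List), which is exactly what the aer_cb line below reports. Paraphrased from the trace, not the verbatim host/aer.sh:

  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &                 # -t: file the app creates once its AER is armed
  aerpid=$!
  waitforfile /tmp/aer_touch_file                   # polls up to 200 x 0.1 s, as traced above
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # hot-add nsid 2 -> AEN fires
  wait $aerpid                                      # aer_cb sees "Changed Namespace" and the app cleans up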
00:20:22.422 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:22.422 aer_cb - Changed Namespace 00:20:22.422 Cleaning up... 00:20:22.422 [ 00:20:22.422 { 00:20:22.422 "allow_any_host": true, 00:20:22.422 "hosts": [], 00:20:22.422 "listen_addresses": [], 00:20:22.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.422 "subtype": "Discovery" 00:20:22.422 }, 00:20:22.422 { 00:20:22.422 "allow_any_host": true, 00:20:22.422 "hosts": [], 00:20:22.422 "listen_addresses": [ 00:20:22.422 { 00:20:22.422 "adrfam": "IPv4", 00:20:22.422 "traddr": "10.0.0.2", 00:20:22.422 "transport": "TCP", 00:20:22.422 "trsvcid": "4420", 00:20:22.422 "trtype": "TCP" 00:20:22.422 } 00:20:22.422 ], 00:20:22.422 "max_cntlid": 65519, 00:20:22.422 "max_namespaces": 2, 00:20:22.422 "min_cntlid": 1, 00:20:22.422 "model_number": "SPDK bdev Controller", 00:20:22.422 "namespaces": [ 00:20:22.422 { 00:20:22.422 "bdev_name": "Malloc0", 00:20:22.422 "name": "Malloc0", 00:20:22.422 "nguid": "665DA527BF914510A4F36C7018283721", 00:20:22.422 "nsid": 1, 00:20:22.422 "uuid": "665da527-bf91-4510-a4f3-6c7018283721" 00:20:22.422 }, 00:20:22.422 { 00:20:22.422 "bdev_name": "Malloc1", 00:20:22.422 "name": "Malloc1", 00:20:22.422 "nguid": "5C928896B434427CABE295FD61C7968B", 00:20:22.422 "nsid": 2, 00:20:22.422 "uuid": "5c928896-b434-427c-abe2-95fd61c7968b" 00:20:22.422 } 00:20:22.422 ], 00:20:22.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.422 "serial_number": "SPDK00000000000001", 00:20:22.422 "subtype": "NVMe" 00:20:22.422 } 00:20:22.422 ] 00:20:22.422 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.422 19:41:09 -- host/aer.sh@43 -- # wait 92918 00:20:22.422 19:41:09 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:22.422 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.422 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.681 19:41:09 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:22.681 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.681 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.681 19:41:09 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.681 19:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.681 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 19:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.681 19:41:09 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:22.681 19:41:09 -- host/aer.sh@51 -- # nvmftestfini 00:20:22.681 19:41:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:22.681 19:41:09 -- nvmf/common.sh@116 -- # sync 00:20:22.681 19:41:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.681 19:41:09 -- nvmf/common.sh@119 -- # set +e 00:20:22.682 19:41:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.682 19:41:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.682 rmmod nvme_tcp 00:20:22.682 rmmod nvme_fabrics 00:20:22.682 rmmod nvme_keyring 00:20:22.682 19:41:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.682 19:41:09 -- nvmf/common.sh@123 -- # set -e 00:20:22.682 19:41:09 -- nvmf/common.sh@124 -- # return 0 00:20:22.682 19:41:09 -- nvmf/common.sh@477 -- # '[' -n 92864 ']' 00:20:22.682 19:41:09 -- nvmf/common.sh@478 -- # killprocess 92864 00:20:22.682 19:41:09 -- 
common/autotest_common.sh@936 -- # '[' -z 92864 ']' 00:20:22.682 19:41:09 -- common/autotest_common.sh@940 -- # kill -0 92864 00:20:22.682 19:41:09 -- common/autotest_common.sh@941 -- # uname 00:20:22.682 19:41:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.682 19:41:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92864 00:20:22.682 killing process with pid 92864 00:20:22.682 19:41:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:22.682 19:41:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:22.682 19:41:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92864' 00:20:22.682 19:41:09 -- common/autotest_common.sh@955 -- # kill 92864 00:20:22.682 [2024-12-15 19:41:09.526137] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:22.682 19:41:09 -- common/autotest_common.sh@960 -- # wait 92864 00:20:22.941 19:41:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:22.941 19:41:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:22.941 19:41:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:22.941 19:41:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.941 19:41:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:22.941 19:41:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.941 19:41:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.941 19:41:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.200 19:41:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:23.200 00:20:23.200 real 0m2.635s 00:20:23.200 user 0m7.121s 00:20:23.200 sys 0m0.751s 00:20:23.200 19:41:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.200 ************************************ 00:20:23.200 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:23.200 END TEST nvmf_aer 00:20:23.200 ************************************ 00:20:23.200 19:41:09 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:23.200 19:41:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:23.200 19:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.200 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:20:23.200 ************************************ 00:20:23.200 START TEST nvmf_async_init 00:20:23.200 ************************************ 00:20:23.200 19:41:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:23.200 * Looking for test storage... 
00:20:23.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.200 19:41:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:23.200 19:41:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:23.200 19:41:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:23.200 19:41:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:23.200 19:41:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:23.200 19:41:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:23.200 19:41:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:23.200 19:41:10 -- scripts/common.sh@335 -- # IFS=.-: 00:20:23.200 19:41:10 -- scripts/common.sh@335 -- # read -ra ver1 00:20:23.200 19:41:10 -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.200 19:41:10 -- scripts/common.sh@336 -- # read -ra ver2 00:20:23.200 19:41:10 -- scripts/common.sh@337 -- # local 'op=<' 00:20:23.200 19:41:10 -- scripts/common.sh@339 -- # ver1_l=2 00:20:23.200 19:41:10 -- scripts/common.sh@340 -- # ver2_l=1 00:20:23.200 19:41:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:23.200 19:41:10 -- scripts/common.sh@343 -- # case "$op" in 00:20:23.200 19:41:10 -- scripts/common.sh@344 -- # : 1 00:20:23.200 19:41:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:23.200 19:41:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.200 19:41:10 -- scripts/common.sh@364 -- # decimal 1 00:20:23.200 19:41:10 -- scripts/common.sh@352 -- # local d=1 00:20:23.200 19:41:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.200 19:41:10 -- scripts/common.sh@354 -- # echo 1 00:20:23.200 19:41:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:23.460 19:41:10 -- scripts/common.sh@365 -- # decimal 2 00:20:23.460 19:41:10 -- scripts/common.sh@352 -- # local d=2 00:20:23.460 19:41:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.460 19:41:10 -- scripts/common.sh@354 -- # echo 2 00:20:23.460 19:41:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:23.460 19:41:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:23.460 19:41:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:23.460 19:41:10 -- scripts/common.sh@367 -- # return 0 00:20:23.460 19:41:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.460 19:41:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.460 --rc genhtml_branch_coverage=1 00:20:23.460 --rc genhtml_function_coverage=1 00:20:23.460 --rc genhtml_legend=1 00:20:23.460 --rc geninfo_all_blocks=1 00:20:23.460 --rc geninfo_unexecuted_blocks=1 00:20:23.460 00:20:23.460 ' 00:20:23.460 19:41:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.460 --rc genhtml_branch_coverage=1 00:20:23.460 --rc genhtml_function_coverage=1 00:20:23.460 --rc genhtml_legend=1 00:20:23.460 --rc geninfo_all_blocks=1 00:20:23.460 --rc geninfo_unexecuted_blocks=1 00:20:23.460 00:20:23.460 ' 00:20:23.460 19:41:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.460 --rc genhtml_branch_coverage=1 00:20:23.460 --rc genhtml_function_coverage=1 00:20:23.460 --rc genhtml_legend=1 00:20:23.460 --rc geninfo_all_blocks=1 00:20:23.460 --rc geninfo_unexecuted_blocks=1 00:20:23.460 00:20:23.460 ' 00:20:23.460 
19:41:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.460 --rc genhtml_branch_coverage=1 00:20:23.460 --rc genhtml_function_coverage=1 00:20:23.460 --rc genhtml_legend=1 00:20:23.460 --rc geninfo_all_blocks=1 00:20:23.460 --rc geninfo_unexecuted_blocks=1 00:20:23.460 00:20:23.460 ' 00:20:23.460 19:41:10 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.460 19:41:10 -- nvmf/common.sh@7 -- # uname -s 00:20:23.460 19:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.460 19:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.460 19:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.460 19:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.460 19:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.460 19:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.460 19:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.460 19:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.460 19:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.460 19:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.460 19:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:23.460 19:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:23.460 19:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.460 19:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.460 19:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.460 19:41:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.460 19:41:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.460 19:41:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.460 19:41:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.460 19:41:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.460 19:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.460 19:41:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.460 19:41:10 -- paths/export.sh@5 -- # export PATH 00:20:23.460 19:41:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.460 19:41:10 -- nvmf/common.sh@46 -- # : 0 00:20:23.460 19:41:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.460 19:41:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.460 19:41:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.460 19:41:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.460 19:41:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.460 19:41:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:23.460 19:41:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.460 19:41:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.460 19:41:10 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:23.460 19:41:10 -- host/async_init.sh@14 -- # null_block_size=512 00:20:23.460 19:41:10 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:23.460 19:41:10 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:23.461 19:41:10 -- host/async_init.sh@20 -- # uuidgen 00:20:23.461 19:41:10 -- host/async_init.sh@20 -- # tr -d - 00:20:23.461 19:41:10 -- host/async_init.sh@20 -- # nguid=c1078529b67143e588e70f619efe2f1c 00:20:23.461 19:41:10 -- host/async_init.sh@22 -- # nvmftestinit 00:20:23.461 19:41:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.461 19:41:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.461 19:41:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.461 19:41:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.461 19:41:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.461 19:41:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.461 19:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.461 19:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.461 19:41:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:23.461 19:41:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:23.461 19:41:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:23.461 19:41:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:23.461 19:41:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:23.461 19:41:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:23.461 19:41:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.461 19:41:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.461 19:41:10 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.461 19:41:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:23.461 19:41:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.461 19:41:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.461 19:41:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.461 19:41:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.461 19:41:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.461 19:41:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.461 19:41:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.461 19:41:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.461 19:41:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:23.461 19:41:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:23.461 Cannot find device "nvmf_tgt_br" 00:20:23.461 19:41:10 -- nvmf/common.sh@154 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.461 Cannot find device "nvmf_tgt_br2" 00:20:23.461 19:41:10 -- nvmf/common.sh@155 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:23.461 19:41:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:23.461 Cannot find device "nvmf_tgt_br" 00:20:23.461 19:41:10 -- nvmf/common.sh@157 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:23.461 Cannot find device "nvmf_tgt_br2" 00:20:23.461 19:41:10 -- nvmf/common.sh@158 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:23.461 19:41:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:23.461 19:41:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.461 19:41:10 -- nvmf/common.sh@161 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.461 19:41:10 -- nvmf/common.sh@162 -- # true 00:20:23.461 19:41:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.461 19:41:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.461 19:41:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.461 19:41:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.461 19:41:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.461 19:41:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.461 19:41:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.719 19:41:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.719 19:41:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.719 19:41:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:23.719 19:41:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:23.719 19:41:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:23.719 19:41:10 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:23.719 19:41:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.719 19:41:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.719 19:41:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.719 19:41:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:23.719 19:41:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:23.719 19:41:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.719 19:41:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.719 19:41:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.719 19:41:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.719 19:41:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.719 19:41:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:23.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:20:23.719 00:20:23.720 --- 10.0.0.2 ping statistics --- 00:20:23.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.720 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:23.720 19:41:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:23.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:20:23.720 00:20:23.720 --- 10.0.0.3 ping statistics --- 00:20:23.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.720 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:23.720 19:41:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:23.720 00:20:23.720 --- 10.0.0.1 ping statistics --- 00:20:23.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.720 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:23.720 19:41:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.720 19:41:10 -- nvmf/common.sh@421 -- # return 0 00:20:23.720 19:41:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:23.720 19:41:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.720 19:41:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:23.720 19:41:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:23.720 19:41:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.720 19:41:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:23.720 19:41:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:23.720 19:41:10 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:23.720 19:41:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:23.720 19:41:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.720 19:41:10 -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 19:41:10 -- nvmf/common.sh@469 -- # nvmfpid=93099 00:20:23.720 19:41:10 -- nvmf/common.sh@470 -- # waitforlisten 93099 00:20:23.720 19:41:10 -- common/autotest_common.sh@829 -- # '[' -z 93099 ']' 00:20:23.720 19:41:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:23.720 19:41:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.720 19:41:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.720 19:41:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.720 19:41:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.720 19:41:10 -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 [2024-12-15 19:41:10.582075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:23.720 [2024-12-15 19:41:10.582170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.978 [2024-12-15 19:41:10.721849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.978 [2024-12-15 19:41:10.807268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.978 [2024-12-15 19:41:10.807458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.978 [2024-12-15 19:41:10.807479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.978 [2024-12-15 19:41:10.807493] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
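(Editorial aside.) With the target process up again for the async_init test, everything that follows is driven over the RPC socket; rpc_cmd here is the test framework's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Condensed, the sequence traced below is:

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512            # 1024 MiB null bdev, 512 B blocks (2097152 blocks in the dump below)
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c1078529b67143e588e70f619efe2f1c
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # loop back over the same target as an initiator; the namespace surfaces as bdev nvme0n1
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0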
00:20:23.978 [2024-12-15 19:41:10.807530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.915 19:41:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.915 19:41:11 -- common/autotest_common.sh@862 -- # return 0 00:20:24.915 19:41:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.915 19:41:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 19:41:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.915 19:41:11 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 [2024-12-15 19:41:11.675816] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 null0 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c1078529b67143e588e70f619efe2f1c 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 [2024-12-15 19:41:11.716044] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.915 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.915 19:41:11 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:24.915 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.915 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:25.174 nvme0n1 00:20:25.174 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.174 19:41:11 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.174 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.174 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:25.174 [ 00:20:25.174 { 00:20:25.174 "aliases": [ 00:20:25.174 "c1078529-b671-43e5-88e7-0f619efe2f1c" 
00:20:25.174 ], 00:20:25.174 "assigned_rate_limits": { 00:20:25.174 "r_mbytes_per_sec": 0, 00:20:25.174 "rw_ios_per_sec": 0, 00:20:25.174 "rw_mbytes_per_sec": 0, 00:20:25.174 "w_mbytes_per_sec": 0 00:20:25.174 }, 00:20:25.174 "block_size": 512, 00:20:25.174 "claimed": false, 00:20:25.174 "driver_specific": { 00:20:25.174 "mp_policy": "active_passive", 00:20:25.174 "nvme": [ 00:20:25.174 { 00:20:25.174 "ctrlr_data": { 00:20:25.174 "ana_reporting": false, 00:20:25.174 "cntlid": 1, 00:20:25.174 "firmware_revision": "24.01.1", 00:20:25.174 "model_number": "SPDK bdev Controller", 00:20:25.174 "multi_ctrlr": true, 00:20:25.174 "oacs": { 00:20:25.174 "firmware": 0, 00:20:25.174 "format": 0, 00:20:25.174 "ns_manage": 0, 00:20:25.174 "security": 0 00:20:25.174 }, 00:20:25.174 "serial_number": "00000000000000000000", 00:20:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.174 "vendor_id": "0x8086" 00:20:25.174 }, 00:20:25.174 "ns_data": { 00:20:25.174 "can_share": true, 00:20:25.174 "id": 1 00:20:25.174 }, 00:20:25.174 "trid": { 00:20:25.174 "adrfam": "IPv4", 00:20:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.174 "traddr": "10.0.0.2", 00:20:25.174 "trsvcid": "4420", 00:20:25.174 "trtype": "TCP" 00:20:25.174 }, 00:20:25.174 "vs": { 00:20:25.174 "nvme_version": "1.3" 00:20:25.174 } 00:20:25.174 } 00:20:25.174 ] 00:20:25.174 }, 00:20:25.174 "name": "nvme0n1", 00:20:25.174 "num_blocks": 2097152, 00:20:25.174 "product_name": "NVMe disk", 00:20:25.174 "supported_io_types": { 00:20:25.174 "abort": true, 00:20:25.174 "compare": true, 00:20:25.174 "compare_and_write": true, 00:20:25.174 "flush": true, 00:20:25.174 "nvme_admin": true, 00:20:25.174 "nvme_io": true, 00:20:25.174 "read": true, 00:20:25.174 "reset": true, 00:20:25.174 "unmap": false, 00:20:25.174 "write": true, 00:20:25.174 "write_zeroes": true 00:20:25.174 }, 00:20:25.174 "uuid": "c1078529-b671-43e5-88e7-0f619efe2f1c", 00:20:25.174 "zoned": false 00:20:25.174 } 00:20:25.174 ] 00:20:25.174 19:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.174 19:41:11 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:25.174 19:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.174 19:41:11 -- common/autotest_common.sh@10 -- # set +x 00:20:25.174 [2024-12-15 19:41:11.971967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.174 [2024-12-15 19:41:11.972055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16611c0 (9): Bad file descriptor 00:20:25.433 [2024-12-15 19:41:12.103988] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
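(Editorial aside.) Worth noting across the two bdev_get_bdevs dumps on either side of this reset: the bdev name and namespace UUID stay the same, but ctrlr_data.cntlid moves from 1 to 2 (and to 3 after the later TLS reconnect on port 4421), confirming the controller really was torn down and re-created. A quick way to watch just that field, assuming jq is available on the host (the test itself does not use it):

  rpc_cmd bdev_nvme_reset_controller nvme0
  rpc_cmd bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'    # 1 before the reset, 2 after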
00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [ 00:20:25.433 { 00:20:25.433 "aliases": [ 00:20:25.433 "c1078529-b671-43e5-88e7-0f619efe2f1c" 00:20:25.433 ], 00:20:25.433 "assigned_rate_limits": { 00:20:25.433 "r_mbytes_per_sec": 0, 00:20:25.433 "rw_ios_per_sec": 0, 00:20:25.433 "rw_mbytes_per_sec": 0, 00:20:25.433 "w_mbytes_per_sec": 0 00:20:25.433 }, 00:20:25.433 "block_size": 512, 00:20:25.433 "claimed": false, 00:20:25.433 "driver_specific": { 00:20:25.433 "mp_policy": "active_passive", 00:20:25.433 "nvme": [ 00:20:25.433 { 00:20:25.433 "ctrlr_data": { 00:20:25.433 "ana_reporting": false, 00:20:25.433 "cntlid": 2, 00:20:25.433 "firmware_revision": "24.01.1", 00:20:25.433 "model_number": "SPDK bdev Controller", 00:20:25.433 "multi_ctrlr": true, 00:20:25.433 "oacs": { 00:20:25.433 "firmware": 0, 00:20:25.433 "format": 0, 00:20:25.433 "ns_manage": 0, 00:20:25.433 "security": 0 00:20:25.433 }, 00:20:25.433 "serial_number": "00000000000000000000", 00:20:25.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.433 "vendor_id": "0x8086" 00:20:25.433 }, 00:20:25.433 "ns_data": { 00:20:25.433 "can_share": true, 00:20:25.433 "id": 1 00:20:25.433 }, 00:20:25.433 "trid": { 00:20:25.433 "adrfam": "IPv4", 00:20:25.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.433 "traddr": "10.0.0.2", 00:20:25.433 "trsvcid": "4420", 00:20:25.433 "trtype": "TCP" 00:20:25.433 }, 00:20:25.433 "vs": { 00:20:25.433 "nvme_version": "1.3" 00:20:25.433 } 00:20:25.433 } 00:20:25.433 ] 00:20:25.433 }, 00:20:25.433 "name": "nvme0n1", 00:20:25.433 "num_blocks": 2097152, 00:20:25.433 "product_name": "NVMe disk", 00:20:25.433 "supported_io_types": { 00:20:25.433 "abort": true, 00:20:25.433 "compare": true, 00:20:25.433 "compare_and_write": true, 00:20:25.433 "flush": true, 00:20:25.433 "nvme_admin": true, 00:20:25.433 "nvme_io": true, 00:20:25.433 "read": true, 00:20:25.433 "reset": true, 00:20:25.433 "unmap": false, 00:20:25.433 "write": true, 00:20:25.433 "write_zeroes": true 00:20:25.433 }, 00:20:25.433 "uuid": "c1078529-b671-43e5-88e7-0f619efe2f1c", 00:20:25.433 "zoned": false 00:20:25.433 } 00:20:25.433 ] 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@53 -- # mktemp 00:20:25.433 19:41:12 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bjbFEjTRqs 00:20:25.433 19:41:12 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:25.433 19:41:12 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bjbFEjTRqs 00:20:25.433 19:41:12 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [2024-12-15 19:41:12.176084] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.433 [2024-12-15 19:41:12.176208] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bjbFEjTRqs 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bjbFEjTRqs 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [2024-12-15 19:41:12.192063] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.433 nvme0n1 00:20:25.433 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.433 19:41:12 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.433 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.433 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [ 00:20:25.433 { 00:20:25.433 "aliases": [ 00:20:25.433 "c1078529-b671-43e5-88e7-0f619efe2f1c" 00:20:25.433 ], 00:20:25.433 "assigned_rate_limits": { 00:20:25.433 "r_mbytes_per_sec": 0, 00:20:25.433 "rw_ios_per_sec": 0, 00:20:25.433 "rw_mbytes_per_sec": 0, 00:20:25.433 "w_mbytes_per_sec": 0 00:20:25.433 }, 00:20:25.433 "block_size": 512, 00:20:25.433 "claimed": false, 00:20:25.433 "driver_specific": { 00:20:25.433 "mp_policy": "active_passive", 00:20:25.433 "nvme": [ 00:20:25.433 { 00:20:25.433 "ctrlr_data": { 00:20:25.433 "ana_reporting": false, 00:20:25.433 "cntlid": 3, 00:20:25.433 "firmware_revision": "24.01.1", 00:20:25.433 "model_number": "SPDK bdev Controller", 00:20:25.433 "multi_ctrlr": true, 00:20:25.433 "oacs": { 00:20:25.433 "firmware": 0, 00:20:25.433 "format": 0, 00:20:25.433 "ns_manage": 0, 00:20:25.433 "security": 0 00:20:25.433 }, 00:20:25.433 "serial_number": "00000000000000000000", 00:20:25.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.433 "vendor_id": "0x8086" 00:20:25.433 }, 00:20:25.433 "ns_data": { 00:20:25.433 "can_share": true, 00:20:25.433 "id": 1 00:20:25.433 }, 00:20:25.433 "trid": { 00:20:25.433 "adrfam": "IPv4", 00:20:25.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.433 "traddr": "10.0.0.2", 00:20:25.433 "trsvcid": "4421", 00:20:25.433 "trtype": "TCP" 00:20:25.433 }, 00:20:25.433 "vs": { 00:20:25.434 "nvme_version": "1.3" 00:20:25.434 } 00:20:25.434 } 00:20:25.434 ] 00:20:25.434 }, 00:20:25.434 "name": "nvme0n1", 00:20:25.434 "num_blocks": 2097152, 00:20:25.434 "product_name": "NVMe disk", 00:20:25.434 "supported_io_types": { 00:20:25.434 "abort": true, 00:20:25.434 "compare": true, 00:20:25.434 "compare_and_write": true, 00:20:25.434 "flush": true, 00:20:25.434 "nvme_admin": true, 00:20:25.434 "nvme_io": true, 00:20:25.434 
"read": true, 00:20:25.434 "reset": true, 00:20:25.434 "unmap": false, 00:20:25.434 "write": true, 00:20:25.434 "write_zeroes": true 00:20:25.434 }, 00:20:25.434 "uuid": "c1078529-b671-43e5-88e7-0f619efe2f1c", 00:20:25.434 "zoned": false 00:20:25.434 } 00:20:25.434 ] 00:20:25.434 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.434 19:41:12 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.434 19:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.434 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.434 19:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.434 19:41:12 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bjbFEjTRqs 00:20:25.434 19:41:12 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:25.434 19:41:12 -- host/async_init.sh@78 -- # nvmftestfini 00:20:25.434 19:41:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:25.434 19:41:12 -- nvmf/common.sh@116 -- # sync 00:20:25.693 19:41:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:25.693 19:41:12 -- nvmf/common.sh@119 -- # set +e 00:20:25.693 19:41:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:25.693 19:41:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:25.693 rmmod nvme_tcp 00:20:25.693 rmmod nvme_fabrics 00:20:25.693 rmmod nvme_keyring 00:20:25.693 19:41:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:25.693 19:41:12 -- nvmf/common.sh@123 -- # set -e 00:20:25.693 19:41:12 -- nvmf/common.sh@124 -- # return 0 00:20:25.693 19:41:12 -- nvmf/common.sh@477 -- # '[' -n 93099 ']' 00:20:25.693 19:41:12 -- nvmf/common.sh@478 -- # killprocess 93099 00:20:25.693 19:41:12 -- common/autotest_common.sh@936 -- # '[' -z 93099 ']' 00:20:25.693 19:41:12 -- common/autotest_common.sh@940 -- # kill -0 93099 00:20:25.693 19:41:12 -- common/autotest_common.sh@941 -- # uname 00:20:25.693 19:41:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.693 19:41:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93099 00:20:25.693 19:41:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.693 19:41:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.693 killing process with pid 93099 00:20:25.693 19:41:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93099' 00:20:25.693 19:41:12 -- common/autotest_common.sh@955 -- # kill 93099 00:20:25.693 19:41:12 -- common/autotest_common.sh@960 -- # wait 93099 00:20:25.952 19:41:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:25.952 19:41:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:25.952 19:41:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:25.952 19:41:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.952 19:41:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:25.952 19:41:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.952 19:41:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.952 19:41:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.952 19:41:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:25.952 00:20:25.952 real 0m2.831s 00:20:25.952 user 0m2.630s 00:20:25.952 sys 0m0.741s 00:20:25.952 19:41:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:25.952 ************************************ 00:20:25.952 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.952 END TEST nvmf_async_init 00:20:25.952 
************************************ 00:20:25.952 19:41:12 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:25.952 19:41:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:25.952 19:41:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:25.952 19:41:12 -- common/autotest_common.sh@10 -- # set +x 00:20:25.952 ************************************ 00:20:25.952 START TEST dma 00:20:25.952 ************************************ 00:20:25.952 19:41:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:26.212 * Looking for test storage... 00:20:26.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.212 19:41:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:26.212 19:41:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:26.212 19:41:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:26.212 19:41:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:26.212 19:41:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:26.212 19:41:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:26.212 19:41:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:26.212 19:41:13 -- scripts/common.sh@335 -- # IFS=.-: 00:20:26.212 19:41:13 -- scripts/common.sh@335 -- # read -ra ver1 00:20:26.212 19:41:13 -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.212 19:41:13 -- scripts/common.sh@336 -- # read -ra ver2 00:20:26.212 19:41:13 -- scripts/common.sh@337 -- # local 'op=<' 00:20:26.212 19:41:13 -- scripts/common.sh@339 -- # ver1_l=2 00:20:26.212 19:41:13 -- scripts/common.sh@340 -- # ver2_l=1 00:20:26.212 19:41:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:26.212 19:41:13 -- scripts/common.sh@343 -- # case "$op" in 00:20:26.212 19:41:13 -- scripts/common.sh@344 -- # : 1 00:20:26.212 19:41:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:26.212 19:41:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.212 19:41:13 -- scripts/common.sh@364 -- # decimal 1 00:20:26.212 19:41:13 -- scripts/common.sh@352 -- # local d=1 00:20:26.212 19:41:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.212 19:41:13 -- scripts/common.sh@354 -- # echo 1 00:20:26.212 19:41:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:26.212 19:41:13 -- scripts/common.sh@365 -- # decimal 2 00:20:26.212 19:41:13 -- scripts/common.sh@352 -- # local d=2 00:20:26.212 19:41:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.212 19:41:13 -- scripts/common.sh@354 -- # echo 2 00:20:26.212 19:41:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:26.212 19:41:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:26.212 19:41:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:26.212 19:41:13 -- scripts/common.sh@367 -- # return 0 00:20:26.212 19:41:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.212 19:41:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:26.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.212 --rc genhtml_branch_coverage=1 00:20:26.212 --rc genhtml_function_coverage=1 00:20:26.212 --rc genhtml_legend=1 00:20:26.212 --rc geninfo_all_blocks=1 00:20:26.212 --rc geninfo_unexecuted_blocks=1 00:20:26.212 00:20:26.212 ' 00:20:26.212 19:41:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:26.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.212 --rc genhtml_branch_coverage=1 00:20:26.212 --rc genhtml_function_coverage=1 00:20:26.212 --rc genhtml_legend=1 00:20:26.212 --rc geninfo_all_blocks=1 00:20:26.212 --rc geninfo_unexecuted_blocks=1 00:20:26.212 00:20:26.212 ' 00:20:26.212 19:41:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:26.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.212 --rc genhtml_branch_coverage=1 00:20:26.212 --rc genhtml_function_coverage=1 00:20:26.212 --rc genhtml_legend=1 00:20:26.212 --rc geninfo_all_blocks=1 00:20:26.212 --rc geninfo_unexecuted_blocks=1 00:20:26.212 00:20:26.212 ' 00:20:26.212 19:41:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:26.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.212 --rc genhtml_branch_coverage=1 00:20:26.212 --rc genhtml_function_coverage=1 00:20:26.212 --rc genhtml_legend=1 00:20:26.212 --rc geninfo_all_blocks=1 00:20:26.212 --rc geninfo_unexecuted_blocks=1 00:20:26.212 00:20:26.212 ' 00:20:26.212 19:41:13 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.212 19:41:13 -- nvmf/common.sh@7 -- # uname -s 00:20:26.212 19:41:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.212 19:41:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.212 19:41:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.212 19:41:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.212 19:41:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.212 19:41:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.212 19:41:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.212 19:41:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.212 19:41:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.212 19:41:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.212 19:41:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:26.212 
19:41:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:26.212 19:41:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.212 19:41:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.212 19:41:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.212 19:41:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.212 19:41:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.212 19:41:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.212 19:41:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.213 19:41:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.213 19:41:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.213 19:41:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.213 19:41:13 -- paths/export.sh@5 -- # export PATH 00:20:26.213 19:41:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.213 19:41:13 -- nvmf/common.sh@46 -- # : 0 00:20:26.213 19:41:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:26.213 19:41:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:26.213 19:41:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:26.213 19:41:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.213 19:41:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.213 19:41:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:26.213 19:41:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:26.213 19:41:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:26.213 19:41:13 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:26.213 19:41:13 -- host/dma.sh@13 -- # exit 0 00:20:26.213 00:20:26.213 real 0m0.261s 00:20:26.213 user 0m0.166s 00:20:26.213 sys 0m0.105s 00:20:26.213 19:41:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:26.213 19:41:13 -- common/autotest_common.sh@10 -- # set +x 00:20:26.213 ************************************ 00:20:26.213 END TEST dma 00:20:26.213 ************************************ 00:20:26.471 19:41:13 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:26.471 19:41:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:26.471 19:41:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:26.471 19:41:13 -- common/autotest_common.sh@10 -- # set +x 00:20:26.471 ************************************ 00:20:26.471 START TEST nvmf_identify 00:20:26.471 ************************************ 00:20:26.471 19:41:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:26.471 * Looking for test storage... 00:20:26.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.471 19:41:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:26.471 19:41:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:26.471 19:41:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:26.471 19:41:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:26.471 19:41:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:26.471 19:41:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:26.471 19:41:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:26.471 19:41:13 -- scripts/common.sh@335 -- # IFS=.-: 00:20:26.471 19:41:13 -- scripts/common.sh@335 -- # read -ra ver1 00:20:26.471 19:41:13 -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.471 19:41:13 -- scripts/common.sh@336 -- # read -ra ver2 00:20:26.471 19:41:13 -- scripts/common.sh@337 -- # local 'op=<' 00:20:26.471 19:41:13 -- scripts/common.sh@339 -- # ver1_l=2 00:20:26.471 19:41:13 -- scripts/common.sh@340 -- # ver2_l=1 00:20:26.471 19:41:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:26.471 19:41:13 -- scripts/common.sh@343 -- # case "$op" in 00:20:26.471 19:41:13 -- scripts/common.sh@344 -- # : 1 00:20:26.471 19:41:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:26.471 19:41:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.471 19:41:13 -- scripts/common.sh@364 -- # decimal 1 00:20:26.472 19:41:13 -- scripts/common.sh@352 -- # local d=1 00:20:26.472 19:41:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.472 19:41:13 -- scripts/common.sh@354 -- # echo 1 00:20:26.472 19:41:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:26.472 19:41:13 -- scripts/common.sh@365 -- # decimal 2 00:20:26.472 19:41:13 -- scripts/common.sh@352 -- # local d=2 00:20:26.472 19:41:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.472 19:41:13 -- scripts/common.sh@354 -- # echo 2 00:20:26.472 19:41:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:26.472 19:41:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:26.472 19:41:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:26.472 19:41:13 -- scripts/common.sh@367 -- # return 0 00:20:26.472 19:41:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.472 19:41:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.472 --rc genhtml_branch_coverage=1 00:20:26.472 --rc genhtml_function_coverage=1 00:20:26.472 --rc genhtml_legend=1 00:20:26.472 --rc geninfo_all_blocks=1 00:20:26.472 --rc geninfo_unexecuted_blocks=1 00:20:26.472 00:20:26.472 ' 00:20:26.472 19:41:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.472 --rc genhtml_branch_coverage=1 00:20:26.472 --rc genhtml_function_coverage=1 00:20:26.472 --rc genhtml_legend=1 00:20:26.472 --rc geninfo_all_blocks=1 00:20:26.472 --rc geninfo_unexecuted_blocks=1 00:20:26.472 00:20:26.472 ' 00:20:26.472 19:41:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.472 --rc genhtml_branch_coverage=1 00:20:26.472 --rc genhtml_function_coverage=1 00:20:26.472 --rc genhtml_legend=1 00:20:26.472 --rc geninfo_all_blocks=1 00:20:26.472 --rc geninfo_unexecuted_blocks=1 00:20:26.472 00:20:26.472 ' 00:20:26.472 19:41:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.472 --rc genhtml_branch_coverage=1 00:20:26.472 --rc genhtml_function_coverage=1 00:20:26.472 --rc genhtml_legend=1 00:20:26.472 --rc geninfo_all_blocks=1 00:20:26.472 --rc geninfo_unexecuted_blocks=1 00:20:26.472 00:20:26.472 ' 00:20:26.472 19:41:13 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.472 19:41:13 -- nvmf/common.sh@7 -- # uname -s 00:20:26.472 19:41:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.472 19:41:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.472 19:41:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.472 19:41:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.472 19:41:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.472 19:41:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.472 19:41:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.472 19:41:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.472 19:41:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.472 19:41:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:26.472 
19:41:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:26.472 19:41:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.472 19:41:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.472 19:41:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.472 19:41:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.472 19:41:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.472 19:41:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.472 19:41:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.472 19:41:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.472 19:41:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.472 19:41:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.472 19:41:13 -- paths/export.sh@5 -- # export PATH 00:20:26.472 19:41:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.472 19:41:13 -- nvmf/common.sh@46 -- # : 0 00:20:26.472 19:41:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:26.472 19:41:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:26.472 19:41:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:26.472 19:41:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.472 19:41:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.472 19:41:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:26.472 19:41:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:26.472 19:41:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:26.472 19:41:13 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.472 19:41:13 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.472 19:41:13 -- host/identify.sh@14 -- # nvmftestinit 00:20:26.472 19:41:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:26.472 19:41:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.472 19:41:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:26.472 19:41:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:26.472 19:41:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:26.472 19:41:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.472 19:41:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.472 19:41:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.472 19:41:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:26.472 19:41:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:26.472 19:41:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.472 19:41:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.472 19:41:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.472 19:41:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:26.472 19:41:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.472 19:41:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.472 19:41:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.472 19:41:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.472 19:41:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.472 19:41:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.472 19:41:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.472 19:41:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.472 19:41:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:26.472 19:41:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:26.472 Cannot find device "nvmf_tgt_br" 00:20:26.472 19:41:13 -- nvmf/common.sh@154 -- # true 00:20:26.472 19:41:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.472 Cannot find device "nvmf_tgt_br2" 00:20:26.472 19:41:13 -- nvmf/common.sh@155 -- # true 00:20:26.472 19:41:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:26.472 19:41:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:26.472 Cannot find device "nvmf_tgt_br" 00:20:26.472 19:41:13 -- nvmf/common.sh@157 -- # true 00:20:26.472 19:41:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:26.731 Cannot find device "nvmf_tgt_br2" 00:20:26.731 19:41:13 -- nvmf/common.sh@158 -- # true 00:20:26.731 19:41:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:26.731 19:41:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:26.731 19:41:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.731 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:26.731 19:41:13 -- nvmf/common.sh@161 -- # true 00:20:26.731 19:41:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.731 19:41:13 -- nvmf/common.sh@162 -- # true 00:20:26.731 19:41:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.731 19:41:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.731 19:41:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.731 19:41:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.731 19:41:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.731 19:41:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.731 19:41:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.731 19:41:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.731 19:41:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.731 19:41:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:26.731 19:41:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:26.731 19:41:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:26.731 19:41:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:26.731 19:41:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.731 19:41:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.731 19:41:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.731 19:41:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:26.731 19:41:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:26.731 19:41:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.731 19:41:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.990 19:41:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.990 19:41:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.990 19:41:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.990 19:41:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:26.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:26.990 00:20:26.990 --- 10.0.0.2 ping statistics --- 00:20:26.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.990 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:26.990 19:41:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:26.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:26.990 00:20:26.990 --- 10.0.0.3 ping statistics --- 00:20:26.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.990 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:26.990 19:41:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:26.990 00:20:26.990 --- 10.0.0.1 ping statistics --- 00:20:26.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.990 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:26.990 19:41:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.990 19:41:13 -- nvmf/common.sh@421 -- # return 0 00:20:26.990 19:41:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:26.990 19:41:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.990 19:41:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:26.990 19:41:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:26.990 19:41:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.990 19:41:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:26.990 19:41:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:26.990 19:41:13 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:26.990 19:41:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.990 19:41:13 -- common/autotest_common.sh@10 -- # set +x 00:20:26.990 19:41:13 -- host/identify.sh@19 -- # nvmfpid=93386 00:20:26.990 19:41:13 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.990 19:41:13 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.990 19:41:13 -- host/identify.sh@23 -- # waitforlisten 93386 00:20:26.990 19:41:13 -- common/autotest_common.sh@829 -- # '[' -z 93386 ']' 00:20:26.990 19:41:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.990 19:41:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.990 19:41:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.990 19:41:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.990 19:41:13 -- common/autotest_common.sh@10 -- # set +x 00:20:26.990 [2024-12-15 19:41:13.762014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:26.990 [2024-12-15 19:41:13.762112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.249 [2024-12-15 19:41:13.901331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.249 [2024-12-15 19:41:14.003982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.249 [2024-12-15 19:41:14.004180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.249 [2024-12-15 19:41:14.004196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.249 [2024-12-15 19:41:14.004207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.249 [2024-12-15 19:41:14.004371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.249 [2024-12-15 19:41:14.004944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.249 [2024-12-15 19:41:14.005225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.249 [2024-12-15 19:41:14.005233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.185 19:41:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.185 19:41:14 -- common/autotest_common.sh@862 -- # return 0 00:20:28.185 19:41:14 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 [2024-12-15 19:41:14.790035] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:28.185 19:41:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 19:41:14 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 Malloc0 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 [2024-12-15 19:41:14.911329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:28.185 19:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.185 19:41:14 -- common/autotest_common.sh@10 -- # set +x 00:20:28.185 [2024-12-15 19:41:14.926956] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:28.185 [ 
00:20:28.185 { 00:20:28.185 "allow_any_host": true, 00:20:28.185 "hosts": [], 00:20:28.185 "listen_addresses": [ 00:20:28.185 { 00:20:28.185 "adrfam": "IPv4", 00:20:28.185 "traddr": "10.0.0.2", 00:20:28.185 "transport": "TCP", 00:20:28.185 "trsvcid": "4420", 00:20:28.185 "trtype": "TCP" 00:20:28.185 } 00:20:28.185 ], 00:20:28.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.185 "subtype": "Discovery" 00:20:28.185 }, 00:20:28.185 { 00:20:28.185 "allow_any_host": true, 00:20:28.185 "hosts": [], 00:20:28.185 "listen_addresses": [ 00:20:28.185 { 00:20:28.185 "adrfam": "IPv4", 00:20:28.185 "traddr": "10.0.0.2", 00:20:28.185 "transport": "TCP", 00:20:28.185 "trsvcid": "4420", 00:20:28.185 "trtype": "TCP" 00:20:28.185 } 00:20:28.185 ], 00:20:28.185 "max_cntlid": 65519, 00:20:28.185 "max_namespaces": 32, 00:20:28.185 "min_cntlid": 1, 00:20:28.185 "model_number": "SPDK bdev Controller", 00:20:28.185 "namespaces": [ 00:20:28.185 { 00:20:28.185 "bdev_name": "Malloc0", 00:20:28.185 "eui64": "ABCDEF0123456789", 00:20:28.185 "name": "Malloc0", 00:20:28.185 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:28.185 "nsid": 1, 00:20:28.185 "uuid": "ec4fe6be-c3af-49ea-962f-c95daf929232" 00:20:28.185 } 00:20:28.185 ], 00:20:28.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.185 "serial_number": "SPDK00000000000001", 00:20:28.185 "subtype": "NVMe" 00:20:28.185 } 00:20:28.185 ] 00:20:28.185 19:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.185 19:41:14 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:28.185 [2024-12-15 19:41:14.962953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:28.185 [2024-12-15 19:41:14.963018] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93439 ] 00:20:28.446 [2024-12-15 19:41:15.097848] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:28.446 [2024-12-15 19:41:15.097923] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:28.446 [2024-12-15 19:41:15.097930] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:28.446 [2024-12-15 19:41:15.097939] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:28.446 [2024-12-15 19:41:15.097949] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:28.446 [2024-12-15 19:41:15.098117] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:28.446 [2024-12-15 19:41:15.098212] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfed540 0 00:20:28.446 [2024-12-15 19:41:15.102849] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:28.446 [2024-12-15 19:41:15.102901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:28.446 [2024-12-15 19:41:15.102907] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:28.446 [2024-12-15 19:41:15.102911] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:28.446 [2024-12-15 19:41:15.102959] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.102966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.102970] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.446 [2024-12-15 19:41:15.102984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:28.446 [2024-12-15 19:41:15.103024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.446 [2024-12-15 19:41:15.110848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.446 [2024-12-15 19:41:15.110869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.446 [2024-12-15 19:41:15.110889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.110894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.446 [2024-12-15 19:41:15.110906] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:28.446 [2024-12-15 19:41:15.110912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:28.446 [2024-12-15 19:41:15.110918] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:28.446 [2024-12-15 19:41:15.110944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.110949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.446 [2024-12-15 
19:41:15.110953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.446 [2024-12-15 19:41:15.110962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.446 [2024-12-15 19:41:15.110999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.446 [2024-12-15 19:41:15.111082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.446 [2024-12-15 19:41:15.111088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.446 [2024-12-15 19:41:15.111092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.111095] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.446 [2024-12-15 19:41:15.111108] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:28.446 [2024-12-15 19:41:15.111115] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:28.446 [2024-12-15 19:41:15.111122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.111126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.446 [2024-12-15 19:41:15.111129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.446 [2024-12-15 19:41:15.111136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.446 [2024-12-15 19:41:15.111170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.446 [2024-12-15 19:41:15.111248] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.111255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.111258] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.111269] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:28.447 [2024-12-15 19:41:15.111278] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111289] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.111300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.447 [2024-12-15 19:41:15.111318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.111384] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.111391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.111394] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111398] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.111404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111419] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.111429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.447 [2024-12-15 19:41:15.111447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.111513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.111520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.111523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.111535] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:28.447 [2024-12-15 19:41:15.111540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111548] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111653] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:28.447 [2024-12-15 19:41:15.111659] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111672] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.111683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.447 [2024-12-15 19:41:15.111702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.111767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.111774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.111777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111781] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.111787] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:28.447 [2024-12-15 19:41:15.111796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.111811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.447 [2024-12-15 19:41:15.111843] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.111924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.111930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.111934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.111943] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:28.447 [2024-12-15 19:41:15.111949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:28.447 [2024-12-15 19:41:15.111956] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:28.447 [2024-12-15 19:41:15.111973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:28.447 [2024-12-15 19:41:15.111984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.111991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.111999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.447 [2024-12-15 19:41:15.112025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.112126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.447 [2024-12-15 19:41:15.112132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.447 [2024-12-15 19:41:15.112136] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112141] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfed540): datao=0, datal=4096, cccid=0 00:20:28.447 [2024-12-15 19:41:15.112146] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1026220) on tqpair(0xfed540): expected_datao=0, payload_size=4096 00:20:28.447 [2024-12-15 19:41:15.112155] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112160] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112168] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.112174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.112178] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112181] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.112190] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:28.447 [2024-12-15 19:41:15.112196] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:28.447 [2024-12-15 19:41:15.112200] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:28.447 [2024-12-15 19:41:15.112205] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:28.447 [2024-12-15 19:41:15.112210] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:28.447 [2024-12-15 19:41:15.112215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:28.447 [2024-12-15 19:41:15.112239] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:28.447 [2024-12-15 19:41:15.112247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.112269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.447 [2024-12-15 19:41:15.112289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.447 [2024-12-15 19:41:15.112371] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.447 [2024-12-15 19:41:15.112377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.447 [2024-12-15 19:41:15.112380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026220) on tqpair=0xfed540 00:20:28.447 [2024-12-15 19:41:15.112393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.112407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.447 [2024-12-15 19:41:15.112414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.112426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.447 [2024-12-15 19:41:15.112432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.112445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.447 [2024-12-15 19:41:15.112450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.447 [2024-12-15 19:41:15.112457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.447 [2024-12-15 19:41:15.112463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.447 [2024-12-15 19:41:15.112479] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:28.448 [2024-12-15 19:41:15.112492] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:28.448 [2024-12-15 19:41:15.112499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.112514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.448 [2024-12-15 19:41:15.112534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026220, cid 0, qid 0 00:20:28.448 [2024-12-15 19:41:15.112540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026380, cid 1, qid 0 00:20:28.448 [2024-12-15 19:41:15.112545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10264e0, cid 2, qid 0 00:20:28.448 [2024-12-15 19:41:15.112550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.448 [2024-12-15 19:41:15.112554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10267a0, cid 4, qid 0 00:20:28.448 [2024-12-15 19:41:15.112661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.448 [2024-12-15 19:41:15.112668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.448 [2024-12-15 19:41:15.112671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112675] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10267a0) on tqpair=0xfed540 00:20:28.448 
[2024-12-15 19:41:15.112681] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:28.448 [2024-12-15 19:41:15.112686] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:28.448 [2024-12-15 19:41:15.112697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112705] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.112712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.448 [2024-12-15 19:41:15.112729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10267a0, cid 4, qid 0 00:20:28.448 [2024-12-15 19:41:15.112813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.448 [2024-12-15 19:41:15.112832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.448 [2024-12-15 19:41:15.112836] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112840] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfed540): datao=0, datal=4096, cccid=4 00:20:28.448 [2024-12-15 19:41:15.112844] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10267a0) on tqpair(0xfed540): expected_datao=0, payload_size=4096 00:20:28.448 [2024-12-15 19:41:15.112852] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112856] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.448 [2024-12-15 19:41:15.112870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.448 [2024-12-15 19:41:15.112873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10267a0) on tqpair=0xfed540 00:20:28.448 [2024-12-15 19:41:15.112890] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:28.448 [2024-12-15 19:41:15.112934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.112963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.448 [2024-12-15 19:41:15.112970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.112977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.112995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:28.448 [2024-12-15 19:41:15.113021] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10267a0, cid 4, qid 0 00:20:28.448 [2024-12-15 19:41:15.113028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026900, cid 5, qid 0 00:20:28.448 [2024-12-15 19:41:15.113140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.448 [2024-12-15 19:41:15.113160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.448 [2024-12-15 19:41:15.113165] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.113168] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfed540): datao=0, datal=1024, cccid=4 00:20:28.448 [2024-12-15 19:41:15.113173] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10267a0) on tqpair(0xfed540): expected_datao=0, payload_size=1024 00:20:28.448 [2024-12-15 19:41:15.113180] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.113184] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.113190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.448 [2024-12-15 19:41:15.113195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.448 [2024-12-15 19:41:15.113199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.113203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026900) on tqpair=0xfed540 00:20:28.448 [2024-12-15 19:41:15.154910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.448 [2024-12-15 19:41:15.154930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.448 [2024-12-15 19:41:15.154935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.154955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10267a0) on tqpair=0xfed540 00:20:28.448 [2024-12-15 19:41:15.154970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.154974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.154978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.154986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.448 [2024-12-15 19:41:15.155015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10267a0, cid 4, qid 0 00:20:28.448 [2024-12-15 19:41:15.155099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.448 [2024-12-15 19:41:15.155106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.448 [2024-12-15 19:41:15.155109] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155113] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfed540): datao=0, datal=3072, cccid=4 00:20:28.448 [2024-12-15 19:41:15.155117] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10267a0) on tqpair(0xfed540): expected_datao=0, payload_size=3072 00:20:28.448 [2024-12-15 19:41:15.155124] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 
19:41:15.155128] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.448 [2024-12-15 19:41:15.155140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.448 [2024-12-15 19:41:15.155144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10267a0) on tqpair=0xfed540 00:20:28.448 [2024-12-15 19:41:15.155158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfed540) 00:20:28.448 [2024-12-15 19:41:15.155188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.448 [2024-12-15 19:41:15.155227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10267a0, cid 4, qid 0 00:20:28.448 [2024-12-15 19:41:15.155318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.448 [2024-12-15 19:41:15.155325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.448 [2024-12-15 19:41:15.155328] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155332] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfed540): datao=0, datal=8, cccid=4 00:20:28.448 [2024-12-15 19:41:15.155336] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10267a0) on tqpair(0xfed540): expected_datao=0, payload_size=8 00:20:28.448 [2024-12-15 19:41:15.155343] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.448 [2024-12-15 19:41:15.155347] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.448 ===================================================== 00:20:28.448 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:28.448 ===================================================== 00:20:28.448 Controller Capabilities/Features 00:20:28.448 ================================ 00:20:28.448 Vendor ID: 0000 00:20:28.448 Subsystem Vendor ID: 0000 00:20:28.448 Serial Number: .................... 00:20:28.448 Model Number: ........................................ 
00:20:28.448 Firmware Version: 24.01.1 00:20:28.448 Recommended Arb Burst: 0 00:20:28.448 IEEE OUI Identifier: 00 00 00 00:20:28.448 Multi-path I/O 00:20:28.448 May have multiple subsystem ports: No 00:20:28.448 May have multiple controllers: No 00:20:28.448 Associated with SR-IOV VF: No 00:20:28.448 Max Data Transfer Size: 131072 00:20:28.448 Max Number of Namespaces: 0 00:20:28.448 Max Number of I/O Queues: 1024 00:20:28.448 NVMe Specification Version (VS): 1.3 00:20:28.448 NVMe Specification Version (Identify): 1.3 00:20:28.448 Maximum Queue Entries: 128 00:20:28.448 Contiguous Queues Required: Yes 00:20:28.448 Arbitration Mechanisms Supported 00:20:28.448 Weighted Round Robin: Not Supported 00:20:28.448 Vendor Specific: Not Supported 00:20:28.448 Reset Timeout: 15000 ms 00:20:28.448 Doorbell Stride: 4 bytes 00:20:28.448 NVM Subsystem Reset: Not Supported 00:20:28.448 Command Sets Supported 00:20:28.448 NVM Command Set: Supported 00:20:28.448 Boot Partition: Not Supported 00:20:28.448 Memory Page Size Minimum: 4096 bytes 00:20:28.448 Memory Page Size Maximum: 4096 bytes 00:20:28.448 Persistent Memory Region: Not Supported 00:20:28.448 Optional Asynchronous Events Supported 00:20:28.449 Namespace Attribute Notices: Not Supported 00:20:28.449 Firmware Activation Notices: Not Supported 00:20:28.449 ANA Change Notices: Not Supported 00:20:28.449 PLE Aggregate Log Change Notices: Not Supported 00:20:28.449 LBA Status Info Alert Notices: Not Supported 00:20:28.449 EGE Aggregate Log Change Notices: Not Supported 00:20:28.449 Normal NVM Subsystem Shutdown event: Not Supported 00:20:28.449 Zone Descriptor Change Notices: Not Supported 00:20:28.449 Discovery Log Change Notices: Supported 00:20:28.449 Controller Attributes 00:20:28.449 128-bit Host Identifier: Not Supported 00:20:28.449 Non-Operational Permissive Mode: Not Supported 00:20:28.449 NVM Sets: Not Supported 00:20:28.449 Read Recovery Levels: Not Supported 00:20:28.449 Endurance Groups: Not Supported 00:20:28.449 Predictable Latency Mode: Not Supported 00:20:28.449 Traffic Based Keep ALive: Not Supported 00:20:28.449 Namespace Granularity: Not Supported 00:20:28.449 SQ Associations: Not Supported 00:20:28.449 UUID List: Not Supported 00:20:28.449 Multi-Domain Subsystem: Not Supported 00:20:28.449 Fixed Capacity Management: Not Supported 00:20:28.449 Variable Capacity Management: Not Supported 00:20:28.449 Delete Endurance Group: Not Supported 00:20:28.449 Delete NVM Set: Not Supported 00:20:28.449 Extended LBA Formats Supported: Not Supported 00:20:28.449 Flexible Data Placement Supported: Not Supported 00:20:28.449 00:20:28.449 Controller Memory Buffer Support 00:20:28.449 ================================ 00:20:28.449 Supported: No 00:20:28.449 00:20:28.449 Persistent Memory Region Support 00:20:28.449 ================================ 00:20:28.449 Supported: No 00:20:28.449 00:20:28.449 Admin Command Set Attributes 00:20:28.449 ============================ 00:20:28.449 Security Send/Receive: Not Supported 00:20:28.449 Format NVM: Not Supported 00:20:28.449 Firmware Activate/Download: Not Supported 00:20:28.449 Namespace Management: Not Supported 00:20:28.449 Device Self-Test: Not Supported 00:20:28.449 Directives: Not Supported 00:20:28.449 NVMe-MI: Not Supported 00:20:28.449 Virtualization Management: Not Supported 00:20:28.449 Doorbell Buffer Config: Not Supported 00:20:28.449 Get LBA Status Capability: Not Supported 00:20:28.449 Command & Feature Lockdown Capability: Not Supported 00:20:28.449 Abort Command Limit: 1 00:20:28.449 
Async Event Request Limit: 4 00:20:28.449 Number of Firmware Slots: N/A 00:20:28.449 Firmware Slot 1 Read-Only: N/A 00:20:28.449 [2024-12-15 19:41:15.196885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.449 [2024-12-15 19:41:15.196908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.449 [2024-12-15 19:41:15.196928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.196932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10267a0) on tqpair=0xfed540 00:20:28.449 Firmware Activation Without Reset: N/A 00:20:28.449 Multiple Update Detection Support: N/A 00:20:28.449 Firmware Update Granularity: No Information Provided 00:20:28.449 Per-Namespace SMART Log: No 00:20:28.449 Asymmetric Namespace Access Log Page: Not Supported 00:20:28.449 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:28.449 Command Effects Log Page: Not Supported 00:20:28.449 Get Log Page Extended Data: Supported 00:20:28.449 Telemetry Log Pages: Not Supported 00:20:28.449 Persistent Event Log Pages: Not Supported 00:20:28.449 Supported Log Pages Log Page: May Support 00:20:28.449 Commands Supported & Effects Log Page: Not Supported 00:20:28.449 Feature Identifiers & Effects Log Page:May Support 00:20:28.449 NVMe-MI Commands & Effects Log Page: May Support 00:20:28.449 Data Area 4 for Telemetry Log: Not Supported 00:20:28.449 Error Log Page Entries Supported: 128 00:20:28.449 Keep Alive: Not Supported 00:20:28.449 00:20:28.449 NVM Command Set Attributes 00:20:28.449 ========================== 00:20:28.449 Submission Queue Entry Size 00:20:28.449 Max: 1 00:20:28.449 Min: 1 00:20:28.449 Completion Queue Entry Size 00:20:28.449 Max: 1 00:20:28.449 Min: 1 00:20:28.449 Number of Namespaces: 0 00:20:28.449 Compare Command: Not Supported 00:20:28.449 Write Uncorrectable Command: Not Supported 00:20:28.449 Dataset Management Command: Not Supported 00:20:28.449 Write Zeroes Command: Not Supported 00:20:28.449 Set Features Save Field: Not Supported 00:20:28.449 Reservations: Not Supported 00:20:28.449 Timestamp: Not Supported 00:20:28.449 Copy: Not Supported 00:20:28.449 Volatile Write Cache: Not Present 00:20:28.449 Atomic Write Unit (Normal): 1 00:20:28.449 Atomic Write Unit (PFail): 1 00:20:28.449 Atomic Compare & Write Unit: 1 00:20:28.449 Fused Compare & Write: Supported 00:20:28.449 Scatter-Gather List 00:20:28.449 SGL Command Set: Supported 00:20:28.449 SGL Keyed: Supported 00:20:28.449 SGL Bit Bucket Descriptor: Not Supported 00:20:28.449 SGL Metadata Pointer: Not Supported 00:20:28.449 Oversized SGL: Not Supported 00:20:28.449 SGL Metadata Address: Not Supported 00:20:28.449 SGL Offset: Supported 00:20:28.449 Transport SGL Data Block: Not Supported 00:20:28.449 Replay Protected Memory Block: Not Supported 00:20:28.449 00:20:28.449 Firmware Slot Information 00:20:28.449 ========================= 00:20:28.449 Active slot: 0 00:20:28.449 00:20:28.449 00:20:28.449 Error Log 00:20:28.449 ========= 00:20:28.449 00:20:28.449 Active Namespaces 00:20:28.449 ================= 00:20:28.449 Discovery Log Page 00:20:28.449 ================== 00:20:28.449 Generation Counter: 2 00:20:28.449 Number of Records: 2 00:20:28.449 Record Format: 0 00:20:28.449 00:20:28.449 Discovery Log Entry 0 00:20:28.449 ---------------------- 00:20:28.449 Transport Type: 3 (TCP) 00:20:28.449 Address Family: 1 (IPv4) 00:20:28.449 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:28.449 Entry Flags: 00:20:28.449 Duplicate
Returned Information: 1 00:20:28.449 Explicit Persistent Connection Support for Discovery: 1 00:20:28.449 Transport Requirements: 00:20:28.449 Secure Channel: Not Required 00:20:28.449 Port ID: 0 (0x0000) 00:20:28.449 Controller ID: 65535 (0xffff) 00:20:28.449 Admin Max SQ Size: 128 00:20:28.449 Transport Service Identifier: 4420 00:20:28.449 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:28.449 Transport Address: 10.0.0.2 00:20:28.449 Discovery Log Entry 1 00:20:28.449 ---------------------- 00:20:28.449 Transport Type: 3 (TCP) 00:20:28.449 Address Family: 1 (IPv4) 00:20:28.449 Subsystem Type: 2 (NVM Subsystem) 00:20:28.449 Entry Flags: 00:20:28.449 Duplicate Returned Information: 0 00:20:28.449 Explicit Persistent Connection Support for Discovery: 0 00:20:28.449 Transport Requirements: 00:20:28.449 Secure Channel: Not Required 00:20:28.449 Port ID: 0 (0x0000) 00:20:28.449 Controller ID: 65535 (0xffff) 00:20:28.449 Admin Max SQ Size: 128 00:20:28.449 Transport Service Identifier: 4420 00:20:28.449 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:28.449 Transport Address: 10.0.0.2 [2024-12-15 19:41:15.197070] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:28.449 [2024-12-15 19:41:15.197089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.449 [2024-12-15 19:41:15.197096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.449 [2024-12-15 19:41:15.197101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.449 [2024-12-15 19:41:15.197107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.449 [2024-12-15 19:41:15.197116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.197120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.197123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.449 [2024-12-15 19:41:15.197131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.449 [2024-12-15 19:41:15.197169] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.449 [2024-12-15 19:41:15.197250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.449 [2024-12-15 19:41:15.197257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.449 [2024-12-15 19:41:15.197261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.197265] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.449 [2024-12-15 19:41:15.197273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.197277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.449 [2024-12-15 19:41:15.197281] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.449 [2024-12-15 19:41:15.197303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.449 [2024-12-15 19:41:15.197327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.449 [2024-12-15 19:41:15.197399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.449 [2024-12-15 19:41:15.197406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.197409] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.197419] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:28.450 [2024-12-15 19:41:15.197424] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:28.450 [2024-12-15 19:41:15.197433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.197448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.197465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.197536] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.197542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.197545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.197560] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.197575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.197591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.197645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.197652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.197655] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197659] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.197669] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197677] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.197684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.197700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.197754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.197761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.197764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.197778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.197793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.197809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.197881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.197889] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.197892] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.197907] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197911] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.197915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.197922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.197953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198019] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
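
The GET LOG PAGE (02) admin commands earlier in this trace (cdw10 values ending in 0x70, with payload sizes of 4096, 1024, 3072 and finally 8 bytes) are the discovery log page being read in chunks before the two Discovery Log Entries above were printed. The sketch below shows roughly how application code would issue that same admin command through SPDK's public API; it assumes an already-connected discovery controller ctrlr, and the helper name, buffer size and completion flag are invented for the illustration, not taken from this test.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_page_done;   /* completion flag for this sketch only */

static void
discovery_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
	}
	g_log_page_done = true;
}

/* Assumes ctrlr came from spdk_nvme_connect() against the discovery NQN. */
static void
read_discovery_log_page(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 4 KiB probe buffer in DMA-safe memory; real code re-reads with a buffer
	 * sized from numrec, which is what the chunked reads in the trace do. */
	struct spdk_nvmf_discovery_log_page *log =
		spdk_zmalloc(4096, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	if (log == NULL) {
		return;
	}
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG, log, 4096, 0,
					     discovery_log_done, NULL) != 0) {
		spdk_free(log);
		return;
	}
	while (!g_log_page_done) {
		/* Polling here is what surfaces the C2H data PDUs seen in the trace. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("discovery log: genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
	       log->genctr, log->numrec);
	spdk_free(log);
}

The FABRIC PROPERTY SET/GET exchange that follows in the trace, ending in "shutdown complete in 5 milliseconds", is the controller shutdown handshake that spdk_nvme_detach() drives once the application is finished with the controller.
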
00:20:28.450 [2024-12-15 19:41:15.198064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198138] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.198182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198259] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198281] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.198304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198393] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.198441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198506] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198512] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.198562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198662] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198666] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.198693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.450 [2024-12-15 19:41:15.198769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.450 [2024-12-15 19:41:15.198775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.450 [2024-12-15 19:41:15.198779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.450 [2024-12-15 19:41:15.198793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.450 [2024-12-15 19:41:15.198800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfed540) 00:20:28.450 [2024-12-15 19:41:15.198807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.450 [2024-12-15 19:41:15.202893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026640, cid 3, qid 0 00:20:28.451 [2024-12-15 19:41:15.202950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.451 [2024-12-15 19:41:15.202958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.451 [2024-12-15 
19:41:15.202962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.451 [2024-12-15 19:41:15.202965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1026640) on tqpair=0xfed540 00:20:28.451 [2024-12-15 19:41:15.202986] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:20:28.451 00:20:28.451 19:41:15 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:28.451 [2024-12-15 19:41:15.234225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:28.451 [2024-12-15 19:41:15.234293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93441 ] 00:20:28.714 [2024-12-15 19:41:15.367201] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:28.714 [2024-12-15 19:41:15.367251] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:28.714 [2024-12-15 19:41:15.367264] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:28.714 [2024-12-15 19:41:15.367274] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:28.714 [2024-12-15 19:41:15.367281] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:28.714 [2024-12-15 19:41:15.367400] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:28.714 [2024-12-15 19:41:15.367456] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ac5540 0 00:20:28.714 [2024-12-15 19:41:15.372924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:28.714 [2024-12-15 19:41:15.372947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:28.714 [2024-12-15 19:41:15.372952] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:28.714 [2024-12-15 19:41:15.372956] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:28.714 [2024-12-15 19:41:15.373008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.373014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.373018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.714 [2024-12-15 19:41:15.373028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:28.714 [2024-12-15 19:41:15.373056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.714 [2024-12-15 19:41:15.380844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.714 [2024-12-15 19:41:15.380864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.714 [2024-12-15 19:41:15.380885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.380890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on 
tqpair=0x1ac5540 00:20:28.714 [2024-12-15 19:41:15.380899] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:28.714 [2024-12-15 19:41:15.380906] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:28.714 [2024-12-15 19:41:15.380911] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:28.714 [2024-12-15 19:41:15.380924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.380929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.380933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.714 [2024-12-15 19:41:15.380941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.714 [2024-12-15 19:41:15.380979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.714 [2024-12-15 19:41:15.381084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.714 [2024-12-15 19:41:15.381100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.714 [2024-12-15 19:41:15.381104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.714 [2024-12-15 19:41:15.381113] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:28.714 [2024-12-15 19:41:15.381120] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:28.714 [2024-12-15 19:41:15.381127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.714 [2024-12-15 19:41:15.381157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.714 [2024-12-15 19:41:15.381193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.714 [2024-12-15 19:41:15.381272] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.714 [2024-12-15 19:41:15.381278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.714 [2024-12-15 19:41:15.381282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381285] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.714 [2024-12-15 19:41:15.381292] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:28.714 [2024-12-15 19:41:15.381300] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:28.714 [2024-12-15 19:41:15.381307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381310] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.714 [2024-12-15 19:41:15.381331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.714 [2024-12-15 19:41:15.381350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.714 [2024-12-15 19:41:15.381425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.714 [2024-12-15 19:41:15.381432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.714 [2024-12-15 19:41:15.381435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.714 [2024-12-15 19:41:15.381445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:28.714 [2024-12-15 19:41:15.381455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.714 [2024-12-15 19:41:15.381470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.714 [2024-12-15 19:41:15.381499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.714 [2024-12-15 19:41:15.381596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.714 [2024-12-15 19:41:15.381602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.714 [2024-12-15 19:41:15.381606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.714 [2024-12-15 19:41:15.381610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.714 [2024-12-15 19:41:15.381615] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:28.714 [2024-12-15 19:41:15.381620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:28.714 [2024-12-15 19:41:15.381628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:28.714 [2024-12-15 19:41:15.381733] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:28.714 [2024-12-15 19:41:15.381737] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:28.714 [2024-12-15 19:41:15.381745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.381749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.381753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.381760] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.715 [2024-12-15 19:41:15.381779] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.715 [2024-12-15 19:41:15.381878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.381887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.381890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.381894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.381900] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:28.715 [2024-12-15 19:41:15.381910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.381915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.381918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.381925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.715 [2024-12-15 19:41:15.381946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.715 [2024-12-15 19:41:15.382043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.382050] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.382053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.382062] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:28.715 [2024-12-15 19:41:15.382068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382075] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:28.715 [2024-12-15 19:41:15.382090] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382107] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.715 [2024-12-15 19:41:15.382147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.715 [2024-12-15 19:41:15.382281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
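
The state transitions logged here for nqn.2016-06.io.spdk:cnode1 (connect adminq, read vs, read cap, check en, CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, keep alive) are all driven by a single spdk_nvme_connect() call; spdk_nvme_identify builds its transport ID from the -r string shown above. A minimal sketch of that caller side follows; the program name and the printed fields are illustrative choices, not something this test actually runs.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* One call walks the whole init state machine in the trace: FABRIC CONNECT,
	 * VS/CAP reads, CC.EN = 1, CSTS.RDY polling, IDENTIFY, AER and keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	/* Cached IDENTIFY CONTROLLER data; MDTS caps the transfer size (131072 above). */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("subnqn: %s, max transfer size: %u bytes\n",
	       (const char *)cdata->subnqn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}
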
00:20:28.715 [2024-12-15 19:41:15.382288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.715 [2024-12-15 19:41:15.382292] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382296] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=4096, cccid=0 00:20:28.715 [2024-12-15 19:41:15.382300] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe220) on tqpair(0x1ac5540): expected_datao=0, payload_size=4096 00:20:28.715 [2024-12-15 19:41:15.382308] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382312] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.382349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.382352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382356] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.382365] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:28.715 [2024-12-15 19:41:15.382369] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:28.715 [2024-12-15 19:41:15.382374] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:28.715 [2024-12-15 19:41:15.382378] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:28.715 [2024-12-15 19:41:15.382382] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:28.715 [2024-12-15 19:41:15.382387] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382400] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.715 [2024-12-15 19:41:15.382445] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.715 [2024-12-15 19:41:15.382526] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.382532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.382536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe220) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.382547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.715 [2024-12-15 19:41:15.382567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.715 [2024-12-15 19:41:15.382586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.715 [2024-12-15 19:41:15.382604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.715 [2024-12-15 19:41:15.382631] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382662] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382672] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.715 [2024-12-15 19:41:15.382704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe220, cid 0, qid 0 00:20:28.715 [2024-12-15 19:41:15.382711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe380, cid 1, qid 0 00:20:28.715 [2024-12-15 19:41:15.382716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe4e0, cid 2, qid 0 00:20:28.715 [2024-12-15 19:41:15.382720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.715 [2024-12-15 19:41:15.382725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.715 [2024-12-15 19:41:15.382867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.382875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.382879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.382888] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:28.715 [2024-12-15 19:41:15.382893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.382920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.382928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.715 [2024-12-15 19:41:15.382935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.715 [2024-12-15 19:41:15.382957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.715 [2024-12-15 19:41:15.383050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.715 [2024-12-15 19:41:15.383056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.715 [2024-12-15 19:41:15.383060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.715 [2024-12-15 19:41:15.383064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.715 [2024-12-15 19:41:15.383120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:28.715 [2024-12-15 19:41:15.383131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.383153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.383173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.716 [2024-12-15 19:41:15.383260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.716 [2024-12-15 19:41:15.383267] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.716 [2024-12-15 19:41:15.383271] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383274] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=4096, cccid=4 00:20:28.716 [2024-12-15 19:41:15.383279] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe7a0) on tqpair(0x1ac5540): expected_datao=0, payload_size=4096 00:20:28.716 [2024-12-15 19:41:15.383287] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383290] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383299] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.383305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.383308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.383327] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:28.716 [2024-12-15 19:41:15.383337] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383347] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.383369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.383389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.716 [2024-12-15 19:41:15.383492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.716 [2024-12-15 19:41:15.383499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.716 [2024-12-15 19:41:15.383502] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383506] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=4096, cccid=4 00:20:28.716 [2024-12-15 19:41:15.383510] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe7a0) on tqpair(0x1ac5540): expected_datao=0, payload_size=4096 00:20:28.716 [2024-12-15 19:41:15.383518] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383521] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.383535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.383539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
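
The IDENTIFY commands above with cdw10:00000002 (active namespace list), cdw10:00000000 nsid:1 (namespace data) and cdw10:00000003 (namespace ID descriptors) are how SPDK learned that "Namespace 1 was added". Once spdk_nvme_connect() returns, the same information is reachable through the namespace getters; a short sketch, assuming a connected ctrlr and with the helper name invented for the example:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Assumes ctrlr was returned by spdk_nvme_connect() for nqn.2016-06.io.spdk:cnode1. */
static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	/* Walks the active namespace list fetched by IDENTIFY cdw10:00000002 above. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue;
		}
		/* Backed by the per-namespace IDENTIFY data transferred above. */
		printf("ns %" PRIu32 ": %" PRIu64 " bytes, %" PRIu32 "-byte sectors\n",
		       nsid, spdk_nvme_ns_get_size(ns), spdk_nvme_ns_get_sector_size(ns));
	}
}
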
00:20:28.716 [2024-12-15 19:41:15.383542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.383569] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383580] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.383602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.383623] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.716 [2024-12-15 19:41:15.383696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.716 [2024-12-15 19:41:15.383703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.716 [2024-12-15 19:41:15.383706] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383710] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=4096, cccid=4 00:20:28.716 [2024-12-15 19:41:15.383714] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe7a0) on tqpair(0x1ac5540): expected_datao=0, payload_size=4096 00:20:28.716 [2024-12-15 19:41:15.383721] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383725] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.383745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.383749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.383762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383770] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383780] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383787] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383797] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:28.716 [2024-12-15 19:41:15.383802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:28.716 [2024-12-15 19:41:15.383807] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:28.716 [2024-12-15 19:41:15.383838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.383855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.383862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383866] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.383869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.383875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.716 [2024-12-15 19:41:15.383901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.716 [2024-12-15 19:41:15.383909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe900, cid 5, qid 0 00:20:28.716 [2024-12-15 19:41:15.383999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.384006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.384010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384014] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.384021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.384026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.384030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe900) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.384044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.384059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.384078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe900, cid 5, qid 0 00:20:28.716 [2024-12-15 19:41:15.384138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.384144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
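The records above show the admin initialization finishing: the namespace identify phases complete, fabrics-only steps (doorbell buffer config, Set Features - Host ID) are skipped, the controller reaches the ready state, and the first Keep Alive and Get Features commands are issued. Once the controller is ready, the active namespaces discovered during that phase can be walked through the public SPDK host API. The sketch below is illustrative only; it assumes a ctrlr handle obtained from a successful spdk_nvme_connect() and prints the same nsze/nuse values that appear in the identify summary further down.

    /* Sketch: walk the active namespaces once the controller is ready.
     * Assumes `ctrlr` came from a successful spdk_nvme_connect(). */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
            uint32_t nsid;

            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
                 nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                    const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

                    /* nsze/nuse mirror the "Size/Utilization (in LBAs)" lines
                     * of the controller summary printed later in this log. */
                    printf("nsid %" PRIu32 ": nsze=%" PRIu64 " nuse=%" PRIu64 "\n",
                           nsid, nsdata->nsze, nsdata->nuse);
            }
    }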
00:20:28.716 [2024-12-15 19:41:15.384148] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe900) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.384162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.384177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.384195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe900, cid 5, qid 0 00:20:28.716 [2024-12-15 19:41:15.384268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.384275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.716 [2024-12-15 19:41:15.384278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384282] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe900) on tqpair=0x1ac5540 00:20:28.716 [2024-12-15 19:41:15.384293] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384297] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.716 [2024-12-15 19:41:15.384301] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5540) 00:20:28.716 [2024-12-15 19:41:15.384307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.716 [2024-12-15 19:41:15.384325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe900, cid 5, qid 0 00:20:28.716 [2024-12-15 19:41:15.384383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.716 [2024-12-15 19:41:15.384390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.717 [2024-12-15 19:41:15.384394] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe900) on tqpair=0x1ac5540 00:20:28.717 [2024-12-15 19:41:15.384410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5540) 00:20:28.717 [2024-12-15 19:41:15.384425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.717 [2024-12-15 19:41:15.384432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5540) 00:20:28.717 [2024-12-15 19:41:15.384446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.717 [2024-12-15 19:41:15.384452] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384456] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ac5540) 00:20:28.717 [2024-12-15 19:41:15.384465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.717 [2024-12-15 19:41:15.384472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ac5540) 00:20:28.717 [2024-12-15 19:41:15.384485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.717 [2024-12-15 19:41:15.384505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe900, cid 5, qid 0 00:20:28.717 [2024-12-15 19:41:15.384512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe7a0, cid 4, qid 0 00:20:28.717 [2024-12-15 19:41:15.384516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afea60, cid 6, qid 0 00:20:28.717 [2024-12-15 19:41:15.384521] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afebc0, cid 7, qid 0 00:20:28.717 [2024-12-15 19:41:15.384657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.717 [2024-12-15 19:41:15.384664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.717 [2024-12-15 19:41:15.384668] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384671] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=8192, cccid=5 00:20:28.717 [2024-12-15 19:41:15.384676] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe900) on tqpair(0x1ac5540): expected_datao=0, payload_size=8192 00:20:28.717 [2024-12-15 19:41:15.384693] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384698] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384703] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.717 [2024-12-15 19:41:15.384709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.717 [2024-12-15 19:41:15.384712] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384715] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=512, cccid=4 00:20:28.717 [2024-12-15 19:41:15.384720] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe7a0) on tqpair(0x1ac5540): expected_datao=0, payload_size=512 00:20:28.717 [2024-12-15 19:41:15.384727] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384730] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384735] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.717 [2024-12-15 19:41:15.384741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.717 [2024-12-15 19:41:15.384744] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384747] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=512, cccid=6 00:20:28.717 [2024-12-15 19:41:15.384751] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afea60) on tqpair(0x1ac5540): expected_datao=0, payload_size=512 00:20:28.717 [2024-12-15 19:41:15.384758] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384761] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:28.717 [2024-12-15 19:41:15.384772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:28.717 [2024-12-15 19:41:15.384775] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384778] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5540): datao=0, datal=4096, cccid=7 00:20:28.717 [2024-12-15 19:41:15.384782] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afebc0) on tqpair(0x1ac5540): expected_datao=0, payload_size=4096 00:20:28.717 [2024-12-15 19:41:15.384789] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384792] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.717 [2024-12-15 19:41:15.384806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.717 [2024-12-15 19:41:15.384809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.384813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe900) on tqpair=0x1ac5540 00:20:28.717 [2024-12-15 19:41:15.388879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.717 [2024-12-15 19:41:15.388890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.717 [2024-12-15 19:41:15.388893] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.388897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe7a0) on tqpair=0x1ac5540 00:20:28.717 [2024-12-15 19:41:15.388908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.717 [2024-12-15 19:41:15.388914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.717 [2024-12-15 19:41:15.388917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.388921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afea60) on tqpair=0x1ac5540 00:20:28.717 [2024-12-15 19:41:15.388929] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.717 [2024-12-15 19:41:15.388934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.717 [2024-12-15 19:41:15.388938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.717 [2024-12-15 19:41:15.388942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afebc0) on tqpair=0x1ac5540 00:20:28.717 
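The debug records above are the admin-queue bring-up that SPDK's host library performs over the TCP transport after connecting to nqn.2016-06.io.spdk:cnode1: the keep-alive timer is armed (one Keep Alive every 5,000,000 us), the number of queues is negotiated, the controller and its namespaces are identified, and the supported log pages and features are fetched. The controller summary that follows is printed by the identify test application. A minimal host-side program that drives the same state machine and reports a few of the same fields could look roughly like the sketch below; it targets the public SPDK host API with error handling trimmed, the application name is hypothetical, and the transport string is copied from the listener shown in this log.

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;
            struct spdk_nvme_ns *ns;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";   /* hypothetical app name */
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same listener as in the log: TCP at 10.0.0.2:4420, cnode1. */
            if (spdk_nvme_transport_id_parse(&trid,
                    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* spdk_nvme_connect() runs the init state machine traced above:
             * enable, identify, set number of queues, keep alive, and so on. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("Serial Number:    %.*s\n", (int)sizeof(cdata->sn), cdata->sn);
            printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
            printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), cdata->fr);

            /* Namespace 1 is the only active namespace this target exposes. */
            ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
            if (ns != NULL) {
                    printf("NS1: %" PRIu64 " LBAs of %" PRIu32 " bytes\n",
                           spdk_nvme_ns_get_num_sectors(ns),
                           spdk_nvme_ns_get_sector_size(ns));
            }

            /* Detaching triggers the shutdown handshake recorded further down. */
            spdk_nvme_detach(ctrlr);
            return 0;
    }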
===================================================== 00:20:28.717 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.717 ===================================================== 00:20:28.717 Controller Capabilities/Features 00:20:28.717 ================================ 00:20:28.717 Vendor ID: 8086 00:20:28.717 Subsystem Vendor ID: 8086 00:20:28.717 Serial Number: SPDK00000000000001 00:20:28.717 Model Number: SPDK bdev Controller 00:20:28.717 Firmware Version: 24.01.1 00:20:28.717 Recommended Arb Burst: 6 00:20:28.717 IEEE OUI Identifier: e4 d2 5c 00:20:28.717 Multi-path I/O 00:20:28.717 May have multiple subsystem ports: Yes 00:20:28.717 May have multiple controllers: Yes 00:20:28.717 Associated with SR-IOV VF: No 00:20:28.717 Max Data Transfer Size: 131072 00:20:28.717 Max Number of Namespaces: 32 00:20:28.717 Max Number of I/O Queues: 127 00:20:28.717 NVMe Specification Version (VS): 1.3 00:20:28.717 NVMe Specification Version (Identify): 1.3 00:20:28.717 Maximum Queue Entries: 128 00:20:28.717 Contiguous Queues Required: Yes 00:20:28.717 Arbitration Mechanisms Supported 00:20:28.717 Weighted Round Robin: Not Supported 00:20:28.717 Vendor Specific: Not Supported 00:20:28.717 Reset Timeout: 15000 ms 00:20:28.717 Doorbell Stride: 4 bytes 00:20:28.717 NVM Subsystem Reset: Not Supported 00:20:28.717 Command Sets Supported 00:20:28.717 NVM Command Set: Supported 00:20:28.717 Boot Partition: Not Supported 00:20:28.717 Memory Page Size Minimum: 4096 bytes 00:20:28.717 Memory Page Size Maximum: 4096 bytes 00:20:28.717 Persistent Memory Region: Not Supported 00:20:28.717 Optional Asynchronous Events Supported 00:20:28.717 Namespace Attribute Notices: Supported 00:20:28.717 Firmware Activation Notices: Not Supported 00:20:28.717 ANA Change Notices: Not Supported 00:20:28.717 PLE Aggregate Log Change Notices: Not Supported 00:20:28.717 LBA Status Info Alert Notices: Not Supported 00:20:28.717 EGE Aggregate Log Change Notices: Not Supported 00:20:28.717 Normal NVM Subsystem Shutdown event: Not Supported 00:20:28.717 Zone Descriptor Change Notices: Not Supported 00:20:28.717 Discovery Log Change Notices: Not Supported 00:20:28.717 Controller Attributes 00:20:28.717 128-bit Host Identifier: Supported 00:20:28.717 Non-Operational Permissive Mode: Not Supported 00:20:28.717 NVM Sets: Not Supported 00:20:28.717 Read Recovery Levels: Not Supported 00:20:28.717 Endurance Groups: Not Supported 00:20:28.717 Predictable Latency Mode: Not Supported 00:20:28.717 Traffic Based Keep ALive: Not Supported 00:20:28.717 Namespace Granularity: Not Supported 00:20:28.717 SQ Associations: Not Supported 00:20:28.717 UUID List: Not Supported 00:20:28.717 Multi-Domain Subsystem: Not Supported 00:20:28.717 Fixed Capacity Management: Not Supported 00:20:28.717 Variable Capacity Management: Not Supported 00:20:28.717 Delete Endurance Group: Not Supported 00:20:28.717 Delete NVM Set: Not Supported 00:20:28.717 Extended LBA Formats Supported: Not Supported 00:20:28.717 Flexible Data Placement Supported: Not Supported 00:20:28.717 00:20:28.717 Controller Memory Buffer Support 00:20:28.717 ================================ 00:20:28.717 Supported: No 00:20:28.717 00:20:28.717 Persistent Memory Region Support 00:20:28.717 ================================ 00:20:28.717 Supported: No 00:20:28.717 00:20:28.717 Admin Command Set Attributes 00:20:28.717 ============================ 00:20:28.717 Security Send/Receive: Not Supported 00:20:28.718 Format NVM: Not Supported 00:20:28.718 Firmware Activate/Download: 
Not Supported 00:20:28.718 Namespace Management: Not Supported 00:20:28.718 Device Self-Test: Not Supported 00:20:28.718 Directives: Not Supported 00:20:28.718 NVMe-MI: Not Supported 00:20:28.718 Virtualization Management: Not Supported 00:20:28.718 Doorbell Buffer Config: Not Supported 00:20:28.718 Get LBA Status Capability: Not Supported 00:20:28.718 Command & Feature Lockdown Capability: Not Supported 00:20:28.718 Abort Command Limit: 4 00:20:28.718 Async Event Request Limit: 4 00:20:28.718 Number of Firmware Slots: N/A 00:20:28.718 Firmware Slot 1 Read-Only: N/A 00:20:28.718 Firmware Activation Without Reset: N/A 00:20:28.718 Multiple Update Detection Support: N/A 00:20:28.718 Firmware Update Granularity: No Information Provided 00:20:28.718 Per-Namespace SMART Log: No 00:20:28.718 Asymmetric Namespace Access Log Page: Not Supported 00:20:28.718 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:28.718 Command Effects Log Page: Supported 00:20:28.718 Get Log Page Extended Data: Supported 00:20:28.718 Telemetry Log Pages: Not Supported 00:20:28.718 Persistent Event Log Pages: Not Supported 00:20:28.718 Supported Log Pages Log Page: May Support 00:20:28.718 Commands Supported & Effects Log Page: Not Supported 00:20:28.718 Feature Identifiers & Effects Log Page:May Support 00:20:28.718 NVMe-MI Commands & Effects Log Page: May Support 00:20:28.718 Data Area 4 for Telemetry Log: Not Supported 00:20:28.718 Error Log Page Entries Supported: 128 00:20:28.718 Keep Alive: Supported 00:20:28.718 Keep Alive Granularity: 10000 ms 00:20:28.718 00:20:28.718 NVM Command Set Attributes 00:20:28.718 ========================== 00:20:28.718 Submission Queue Entry Size 00:20:28.718 Max: 64 00:20:28.718 Min: 64 00:20:28.718 Completion Queue Entry Size 00:20:28.718 Max: 16 00:20:28.718 Min: 16 00:20:28.718 Number of Namespaces: 32 00:20:28.718 Compare Command: Supported 00:20:28.718 Write Uncorrectable Command: Not Supported 00:20:28.718 Dataset Management Command: Supported 00:20:28.718 Write Zeroes Command: Supported 00:20:28.718 Set Features Save Field: Not Supported 00:20:28.718 Reservations: Supported 00:20:28.718 Timestamp: Not Supported 00:20:28.718 Copy: Supported 00:20:28.718 Volatile Write Cache: Present 00:20:28.718 Atomic Write Unit (Normal): 1 00:20:28.718 Atomic Write Unit (PFail): 1 00:20:28.718 Atomic Compare & Write Unit: 1 00:20:28.718 Fused Compare & Write: Supported 00:20:28.718 Scatter-Gather List 00:20:28.718 SGL Command Set: Supported 00:20:28.718 SGL Keyed: Supported 00:20:28.718 SGL Bit Bucket Descriptor: Not Supported 00:20:28.718 SGL Metadata Pointer: Not Supported 00:20:28.718 Oversized SGL: Not Supported 00:20:28.718 SGL Metadata Address: Not Supported 00:20:28.718 SGL Offset: Supported 00:20:28.718 Transport SGL Data Block: Not Supported 00:20:28.718 Replay Protected Memory Block: Not Supported 00:20:28.718 00:20:28.718 Firmware Slot Information 00:20:28.718 ========================= 00:20:28.718 Active slot: 1 00:20:28.718 Slot 1 Firmware Revision: 24.01.1 00:20:28.718 00:20:28.718 00:20:28.718 Commands Supported and Effects 00:20:28.718 ============================== 00:20:28.718 Admin Commands 00:20:28.718 -------------- 00:20:28.718 Get Log Page (02h): Supported 00:20:28.718 Identify (06h): Supported 00:20:28.718 Abort (08h): Supported 00:20:28.718 Set Features (09h): Supported 00:20:28.718 Get Features (0Ah): Supported 00:20:28.718 Asynchronous Event Request (0Ch): Supported 00:20:28.718 Keep Alive (18h): Supported 00:20:28.718 I/O Commands 00:20:28.718 ------------ 
00:20:28.718 Flush (00h): Supported LBA-Change 00:20:28.718 Write (01h): Supported LBA-Change 00:20:28.718 Read (02h): Supported 00:20:28.718 Compare (05h): Supported 00:20:28.718 Write Zeroes (08h): Supported LBA-Change 00:20:28.718 Dataset Management (09h): Supported LBA-Change 00:20:28.718 Copy (19h): Supported LBA-Change 00:20:28.718 Unknown (79h): Supported LBA-Change 00:20:28.718 Unknown (7Ah): Supported 00:20:28.718 00:20:28.718 Error Log 00:20:28.718 ========= 00:20:28.718 00:20:28.718 Arbitration 00:20:28.718 =========== 00:20:28.718 Arbitration Burst: 1 00:20:28.718 00:20:28.718 Power Management 00:20:28.718 ================ 00:20:28.718 Number of Power States: 1 00:20:28.718 Current Power State: Power State #0 00:20:28.718 Power State #0: 00:20:28.718 Max Power: 0.00 W 00:20:28.718 Non-Operational State: Operational 00:20:28.718 Entry Latency: Not Reported 00:20:28.718 Exit Latency: Not Reported 00:20:28.718 Relative Read Throughput: 0 00:20:28.718 Relative Read Latency: 0 00:20:28.718 Relative Write Throughput: 0 00:20:28.718 Relative Write Latency: 0 00:20:28.718 Idle Power: Not Reported 00:20:28.718 Active Power: Not Reported 00:20:28.718 Non-Operational Permissive Mode: Not Supported 00:20:28.718 00:20:28.718 Health Information 00:20:28.718 ================== 00:20:28.718 Critical Warnings: 00:20:28.718 Available Spare Space: OK 00:20:28.718 Temperature: OK 00:20:28.718 Device Reliability: OK 00:20:28.718 Read Only: No 00:20:28.718 Volatile Memory Backup: OK 00:20:28.718 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:28.718 Temperature Threshold: [2024-12-15 19:41:15.389065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389077] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ac5540) 00:20:28.718 [2024-12-15 19:41:15.389085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.718 [2024-12-15 19:41:15.389112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afebc0, cid 7, qid 0 00:20:28.718 [2024-12-15 19:41:15.389213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.718 [2024-12-15 19:41:15.389220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.718 [2024-12-15 19:41:15.389223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afebc0) on tqpair=0x1ac5540 00:20:28.718 [2024-12-15 19:41:15.389262] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:28.718 [2024-12-15 19:41:15.389274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.718 [2024-12-15 19:41:15.389281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.718 [2024-12-15 19:41:15.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.718 [2024-12-15 19:41:15.389293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
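From "Prepare to destruct SSD" onward the host is detaching from the controller: outstanding admin commands are completed as ABORTED - SQ DELETION, a normal shutdown is requested through CC.SHN, and the repeated FABRIC PROPERTY GET records that follow are the host polling CSTS.SHST over the fabrics Property Get command until the target reports shutdown complete (here in about 7 ms against the 10,000 ms budget noted in the log). A spec-level sketch of that handshake is shown below; prop_get(), prop_set() and timed_out_ms() are hypothetical helpers standing in for the fabrics Property Get/Set exchange, not SPDK API, and the register offsets and bit positions are taken from the NVMe base specification.

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC   0x14u   /* Controller Configuration */
    #define NVME_REG_CSTS 0x1cu   /* Controller Status        */

    extern uint32_t prop_get(uint32_t offset);             /* hypothetical */
    extern void     prop_set(uint32_t offset, uint32_t v); /* hypothetical */
    extern bool     timed_out_ms(uint32_t budget_ms);      /* hypothetical */

    static bool shutdown_controller(void)
    {
            uint32_t cc = prop_get(NVME_REG_CC);

            cc &= ~(0x3u << 14);   /* CC.SHN occupies bits 15:14            */
            cc |=  (0x1u << 14);   /* 01b = normal shutdown notification    */
            prop_set(NVME_REG_CC, cc);

            /* Poll CSTS.SHST (bits 03:02) until 10b = shutdown complete.
             * The log above uses a 10000 ms budget and finishes in ~7 ms. */
            while (!timed_out_ms(10000)) {
                    uint32_t csts = prop_get(NVME_REG_CSTS);

                    if (((csts >> 2) & 0x3u) == 0x2u) {
                            return true;
                    }
            }
            return false;
    }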
00:20:28.718 [2024-12-15 19:41:15.389301] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.718 [2024-12-15 19:41:15.389316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.718 [2024-12-15 19:41:15.389339] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.718 [2024-12-15 19:41:15.389426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.718 [2024-12-15 19:41:15.389433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.718 [2024-12-15 19:41:15.389436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.718 [2024-12-15 19:41:15.389448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.718 [2024-12-15 19:41:15.389455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.718 [2024-12-15 19:41:15.389462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.389484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.389588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.389595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.389598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.389607] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:28.719 [2024-12-15 19:41:15.389612] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:28.719 [2024-12-15 19:41:15.389621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.389636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.389654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.389720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.389726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.389729] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 
19:41:15.389733] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.389744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.389759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.389777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.389877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.389886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.389889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.389905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.389913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.389920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.389947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390172] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390177] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390187] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390195] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390685] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 
19:41:15.390689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390854] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390857] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.390872] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390877] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.390880] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.390888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.390908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.390987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.390993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.390997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.391011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.391026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.391045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.391104] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.391115] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.391120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391124] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.391135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391143] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.391150] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.719 [2024-12-15 19:41:15.391170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.719 [2024-12-15 19:41:15.391233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.719 [2024-12-15 19:41:15.391239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.719 [2024-12-15 19:41:15.391242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.719 [2024-12-15 19:41:15.391257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.719 [2024-12-15 19:41:15.391265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.719 [2024-12-15 19:41:15.391272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.391367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.391383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.391387] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.391401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391405] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.391416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.391504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.391510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.391514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.391530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.391545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391576] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.391633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.391644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.391648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.391663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.391678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.391787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.391794] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.391797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.391833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.391850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.391929] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.391935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.391938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.391953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391957] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.391961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.391969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.391987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:28.720 [2024-12-15 19:41:15.392079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.392082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.392096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.392111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.392130] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.392213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.392226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.392240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.392254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.392272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.392389] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.392393] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.392418] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392426] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.392444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.392464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.392533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.392537] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.392551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.392566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.392593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.392690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.392694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.392709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.392717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.392724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.392744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.392812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.396907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.396942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.396946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.396973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.396978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.396982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5540) 00:20:28.720 [2024-12-15 19:41:15.397007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.720 [2024-12-15 19:41:15.397033] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe640, cid 3, qid 0 00:20:28.720 [2024-12-15 19:41:15.397108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:28.720 [2024-12-15 19:41:15.397115] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:28.720 [2024-12-15 19:41:15.397118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:28.720 [2024-12-15 19:41:15.397122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1afe640) on 
tqpair=0x1ac5540 00:20:28.720 [2024-12-15 19:41:15.397143] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:28.720 0 Kelvin (-273 Celsius) 00:20:28.720 Available Spare: 0% 00:20:28.720 Available Spare Threshold: 0% 00:20:28.720 Life Percentage Used: 0% 00:20:28.720 Data Units Read: 0 00:20:28.720 Data Units Written: 0 00:20:28.720 Host Read Commands: 0 00:20:28.720 Host Write Commands: 0 00:20:28.720 Controller Busy Time: 0 minutes 00:20:28.720 Power Cycles: 0 00:20:28.721 Power On Hours: 0 hours 00:20:28.721 Unsafe Shutdowns: 0 00:20:28.721 Unrecoverable Media Errors: 0 00:20:28.721 Lifetime Error Log Entries: 0 00:20:28.721 Warning Temperature Time: 0 minutes 00:20:28.721 Critical Temperature Time: 0 minutes 00:20:28.721 00:20:28.721 Number of Queues 00:20:28.721 ================ 00:20:28.721 Number of I/O Submission Queues: 127 00:20:28.721 Number of I/O Completion Queues: 127 00:20:28.721 00:20:28.721 Active Namespaces 00:20:28.721 ================= 00:20:28.721 Namespace ID:1 00:20:28.721 Error Recovery Timeout: Unlimited 00:20:28.721 Command Set Identifier: NVM (00h) 00:20:28.721 Deallocate: Supported 00:20:28.721 Deallocated/Unwritten Error: Not Supported 00:20:28.721 Deallocated Read Value: Unknown 00:20:28.721 Deallocate in Write Zeroes: Not Supported 00:20:28.721 Deallocated Guard Field: 0xFFFF 00:20:28.721 Flush: Supported 00:20:28.721 Reservation: Supported 00:20:28.721 Namespace Sharing Capabilities: Multiple Controllers 00:20:28.721 Size (in LBAs): 131072 (0GiB) 00:20:28.721 Capacity (in LBAs): 131072 (0GiB) 00:20:28.721 Utilization (in LBAs): 131072 (0GiB) 00:20:28.721 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:28.721 EUI64: ABCDEF0123456789 00:20:28.721 UUID: ec4fe6be-c3af-49ea-962f-c95daf929232 00:20:28.721 Thin Provisioning: Not Supported 00:20:28.721 Per-NS Atomic Units: Yes 00:20:28.721 Atomic Boundary Size (Normal): 0 00:20:28.721 Atomic Boundary Size (PFail): 0 00:20:28.721 Atomic Boundary Offset: 0 00:20:28.721 Maximum Single Source Range Length: 65535 00:20:28.721 Maximum Copy Length: 65535 00:20:28.721 Maximum Source Range Count: 1 00:20:28.721 NGUID/EUI64 Never Reused: No 00:20:28.721 Namespace Write Protected: No 00:20:28.721 Number of LBA Formats: 1 00:20:28.721 Current LBA Format: LBA Format #00 00:20:28.721 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:28.721 00:20:28.721 19:41:15 -- host/identify.sh@51 -- # sync 00:20:28.721 19:41:15 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.721 19:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.721 19:41:15 -- common/autotest_common.sh@10 -- # set +x 00:20:28.721 19:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.721 19:41:15 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:28.721 19:41:15 -- host/identify.sh@56 -- # nvmftestfini 00:20:28.721 19:41:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:28.721 19:41:15 -- nvmf/common.sh@116 -- # sync 00:20:28.721 19:41:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:28.721 19:41:15 -- nvmf/common.sh@119 -- # set +e 00:20:28.721 19:41:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:28.721 19:41:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:28.721 rmmod nvme_tcp 00:20:28.721 rmmod nvme_fabrics 00:20:28.721 rmmod nvme_keyring 00:20:28.721 19:41:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:28.721 19:41:15 -- nvmf/common.sh@123 -- # set -e 00:20:28.721 
19:41:15 -- nvmf/common.sh@124 -- # return 0 00:20:28.721 19:41:15 -- nvmf/common.sh@477 -- # '[' -n 93386 ']' 00:20:28.721 19:41:15 -- nvmf/common.sh@478 -- # killprocess 93386 00:20:28.721 19:41:15 -- common/autotest_common.sh@936 -- # '[' -z 93386 ']' 00:20:28.721 19:41:15 -- common/autotest_common.sh@940 -- # kill -0 93386 00:20:28.721 19:41:15 -- common/autotest_common.sh@941 -- # uname 00:20:28.721 19:41:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.721 19:41:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93386 00:20:28.980 19:41:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:28.980 19:41:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:28.980 killing process with pid 93386 00:20:28.980 19:41:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93386' 00:20:28.980 19:41:15 -- common/autotest_common.sh@955 -- # kill 93386 00:20:28.980 [2024-12-15 19:41:15.612766] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:28.980 19:41:15 -- common/autotest_common.sh@960 -- # wait 93386 00:20:29.239 19:41:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.239 19:41:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.239 19:41:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:29.239 19:41:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.239 19:41:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.239 19:41:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.239 19:41:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.239 19:41:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.239 19:41:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:29.239 00:20:29.239 real 0m2.877s 00:20:29.239 user 0m7.861s 00:20:29.239 sys 0m0.781s 00:20:29.239 19:41:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:29.239 19:41:15 -- common/autotest_common.sh@10 -- # set +x 00:20:29.239 ************************************ 00:20:29.239 END TEST nvmf_identify 00:20:29.239 ************************************ 00:20:29.239 19:41:16 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:29.239 19:41:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:29.239 19:41:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.239 19:41:16 -- common/autotest_common.sh@10 -- # set +x 00:20:29.239 ************************************ 00:20:29.239 START TEST nvmf_perf 00:20:29.239 ************************************ 00:20:29.239 19:41:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:29.239 * Looking for test storage... 
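Condensed from the xtrace above, the identify test's teardown boils down to the following sequence (PID 93386 is this run's nvmf_tgt process; rpc_cmd is the suite's wrapper around scripts/rpc.py):

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
  modprobe -v -r nvme-tcp                                    # unload host-side transport module
  modprobe -v -r nvme-fabrics                                # nvme_keyring goes with them, per the rmmod output
  kill 93386                                                 # stop the target app
  ip -4 addr flush nvmf_init_if                              # clear the initiator-side address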
00:20:29.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.239 19:41:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:29.239 19:41:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:29.239 19:41:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:29.499 19:41:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:29.499 19:41:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:29.499 19:41:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:29.499 19:41:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:29.499 19:41:16 -- scripts/common.sh@335 -- # IFS=.-: 00:20:29.499 19:41:16 -- scripts/common.sh@335 -- # read -ra ver1 00:20:29.499 19:41:16 -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.499 19:41:16 -- scripts/common.sh@336 -- # read -ra ver2 00:20:29.499 19:41:16 -- scripts/common.sh@337 -- # local 'op=<' 00:20:29.499 19:41:16 -- scripts/common.sh@339 -- # ver1_l=2 00:20:29.499 19:41:16 -- scripts/common.sh@340 -- # ver2_l=1 00:20:29.499 19:41:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:29.499 19:41:16 -- scripts/common.sh@343 -- # case "$op" in 00:20:29.499 19:41:16 -- scripts/common.sh@344 -- # : 1 00:20:29.499 19:41:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:29.499 19:41:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.499 19:41:16 -- scripts/common.sh@364 -- # decimal 1 00:20:29.499 19:41:16 -- scripts/common.sh@352 -- # local d=1 00:20:29.499 19:41:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.499 19:41:16 -- scripts/common.sh@354 -- # echo 1 00:20:29.499 19:41:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:29.499 19:41:16 -- scripts/common.sh@365 -- # decimal 2 00:20:29.499 19:41:16 -- scripts/common.sh@352 -- # local d=2 00:20:29.499 19:41:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.499 19:41:16 -- scripts/common.sh@354 -- # echo 2 00:20:29.499 19:41:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:29.499 19:41:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:29.499 19:41:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:29.499 19:41:16 -- scripts/common.sh@367 -- # return 0 00:20:29.499 19:41:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.499 19:41:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.499 --rc genhtml_branch_coverage=1 00:20:29.499 --rc genhtml_function_coverage=1 00:20:29.499 --rc genhtml_legend=1 00:20:29.499 --rc geninfo_all_blocks=1 00:20:29.499 --rc geninfo_unexecuted_blocks=1 00:20:29.499 00:20:29.499 ' 00:20:29.499 19:41:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.499 --rc genhtml_branch_coverage=1 00:20:29.499 --rc genhtml_function_coverage=1 00:20:29.499 --rc genhtml_legend=1 00:20:29.499 --rc geninfo_all_blocks=1 00:20:29.499 --rc geninfo_unexecuted_blocks=1 00:20:29.499 00:20:29.499 ' 00:20:29.499 19:41:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.499 --rc genhtml_branch_coverage=1 00:20:29.499 --rc genhtml_function_coverage=1 00:20:29.499 --rc genhtml_legend=1 00:20:29.499 --rc geninfo_all_blocks=1 00:20:29.499 --rc geninfo_unexecuted_blocks=1 00:20:29.499 00:20:29.499 ' 00:20:29.499 
19:41:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.499 --rc genhtml_branch_coverage=1 00:20:29.499 --rc genhtml_function_coverage=1 00:20:29.499 --rc genhtml_legend=1 00:20:29.499 --rc geninfo_all_blocks=1 00:20:29.499 --rc geninfo_unexecuted_blocks=1 00:20:29.499 00:20:29.499 ' 00:20:29.499 19:41:16 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.499 19:41:16 -- nvmf/common.sh@7 -- # uname -s 00:20:29.499 19:41:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.499 19:41:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.499 19:41:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.499 19:41:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.499 19:41:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.499 19:41:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.499 19:41:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.499 19:41:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.499 19:41:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.499 19:41:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:29.499 19:41:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:20:29.499 19:41:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.499 19:41:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.499 19:41:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.499 19:41:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.499 19:41:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.499 19:41:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.499 19:41:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.499 19:41:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.499 19:41:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.499 19:41:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.499 19:41:16 -- paths/export.sh@5 -- # export PATH 00:20:29.499 19:41:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.499 19:41:16 -- nvmf/common.sh@46 -- # : 0 00:20:29.499 19:41:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.499 19:41:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.499 19:41:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.499 19:41:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.499 19:41:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.499 19:41:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:29.499 19:41:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.499 19:41:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.499 19:41:16 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:29.499 19:41:16 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:29.499 19:41:16 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.499 19:41:16 -- host/perf.sh@17 -- # nvmftestinit 00:20:29.499 19:41:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:29.499 19:41:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.499 19:41:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:29.499 19:41:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:29.499 19:41:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:29.499 19:41:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.499 19:41:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.499 19:41:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.499 19:41:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:29.499 19:41:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:29.499 19:41:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.499 19:41:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.499 19:41:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.499 19:41:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:29.499 19:41:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.499 19:41:16 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.499 19:41:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.499 19:41:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.499 19:41:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.499 19:41:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.499 19:41:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.499 19:41:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.499 19:41:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:29.500 19:41:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:29.500 Cannot find device "nvmf_tgt_br" 00:20:29.500 19:41:16 -- nvmf/common.sh@154 -- # true 00:20:29.500 19:41:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.500 Cannot find device "nvmf_tgt_br2" 00:20:29.500 19:41:16 -- nvmf/common.sh@155 -- # true 00:20:29.500 19:41:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:29.500 19:41:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:29.500 Cannot find device "nvmf_tgt_br" 00:20:29.500 19:41:16 -- nvmf/common.sh@157 -- # true 00:20:29.500 19:41:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:29.500 Cannot find device "nvmf_tgt_br2" 00:20:29.500 19:41:16 -- nvmf/common.sh@158 -- # true 00:20:29.500 19:41:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:29.500 19:41:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:29.759 19:41:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.759 19:41:16 -- nvmf/common.sh@161 -- # true 00:20:29.759 19:41:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.759 19:41:16 -- nvmf/common.sh@162 -- # true 00:20:29.759 19:41:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.759 19:41:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.759 19:41:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.759 19:41:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.759 19:41:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.759 19:41:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.759 19:41:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.759 19:41:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.759 19:41:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.759 19:41:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:29.759 19:41:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:29.759 19:41:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:29.759 19:41:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:29.759 19:41:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.759 19:41:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
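Together with the bridge wiring and firewall rule that follow in the trace, the interface setup above amounts to this veth/namespace topology (link-up steps and the second target interface, nvmf_tgt_if2 at 10.0.0.3, are handled the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk                                      # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                            # both peer ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in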
00:20:29.759 19:41:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.759 19:41:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:29.759 19:41:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:29.759 19:41:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.759 19:41:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.759 19:41:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.759 19:41:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.759 19:41:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.759 19:41:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:29.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:29.759 00:20:29.759 --- 10.0.0.2 ping statistics --- 00:20:29.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.759 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:29.759 19:41:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:29.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:29.759 00:20:29.759 --- 10.0.0.3 ping statistics --- 00:20:29.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.759 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:29.759 19:41:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:29.759 00:20:29.759 --- 10.0.0.1 ping statistics --- 00:20:29.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.759 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:29.759 19:41:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.759 19:41:16 -- nvmf/common.sh@421 -- # return 0 00:20:29.759 19:41:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:29.759 19:41:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.759 19:41:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:29.759 19:41:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:29.759 19:41:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.759 19:41:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:29.759 19:41:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:29.759 19:41:16 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:29.759 19:41:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:29.759 19:41:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.759 19:41:16 -- common/autotest_common.sh@10 -- # set +x 00:20:29.759 19:41:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:29.759 19:41:16 -- nvmf/common.sh@469 -- # nvmfpid=93618 00:20:29.759 19:41:16 -- nvmf/common.sh@470 -- # waitforlisten 93618 00:20:29.759 19:41:16 -- common/autotest_common.sh@829 -- # '[' -z 93618 ']' 00:20:29.759 19:41:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.759 19:41:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.759 19:41:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:29.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.759 19:41:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.759 19:41:16 -- common/autotest_common.sh@10 -- # set +x 00:20:30.018 [2024-12-15 19:41:16.686835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:30.018 [2024-12-15 19:41:16.686934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.018 [2024-12-15 19:41:16.812057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.018 [2024-12-15 19:41:16.889749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.018 [2024-12-15 19:41:16.889936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.018 [2024-12-15 19:41:16.889950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.018 [2024-12-15 19:41:16.889959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.018 [2024-12-15 19:41:16.890165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.018 [2024-12-15 19:41:16.890362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.018 [2024-12-15 19:41:16.891039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.018 [2024-12-15 19:41:16.891051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.955 19:41:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.955 19:41:17 -- common/autotest_common.sh@862 -- # return 0 00:20:30.955 19:41:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:30.955 19:41:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:30.955 19:41:17 -- common/autotest_common.sh@10 -- # set +x 00:20:30.955 19:41:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.955 19:41:17 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:30.955 19:41:17 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:31.522 19:41:18 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:31.522 19:41:18 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:31.781 19:41:18 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:31.781 19:41:18 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:32.040 19:41:18 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:32.040 19:41:18 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:32.040 19:41:18 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:32.040 19:41:18 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:32.040 19:41:18 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.299 [2024-12-15 19:41:19.047592] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.299 19:41:19 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.558 19:41:19 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:20:32.558 19:41:19 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.817 19:41:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:32.817 19:41:19 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:33.076 19:41:19 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.076 [2024-12-15 19:41:19.968813] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.335 19:41:19 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.594 19:41:20 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:33.594 19:41:20 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:33.594 19:41:20 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:33.594 19:41:20 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:34.531 Initializing NVMe Controllers 00:20:34.531 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:34.531 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:34.531 Initialization complete. Launching workers. 00:20:34.531 ======================================================== 00:20:34.531 Latency(us) 00:20:34.531 Device Information : IOPS MiB/s Average min max 00:20:34.531 PCIE (0000:00:06.0) NSID 1 from core 0: 20461.22 79.93 1563.90 339.67 8195.55 00:20:34.531 ======================================================== 00:20:34.531 Total : 20461.22 79.93 1563.90 339.67 8195.55 00:20:34.531 00:20:34.531 19:41:21 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.908 Initializing NVMe Controllers 00:20:35.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:35.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:35.908 Initialization complete. Launching workers. 
00:20:35.908 ======================================================== 00:20:35.908 Latency(us) 00:20:35.908 Device Information : IOPS MiB/s Average min max 00:20:35.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2873.24 11.22 347.77 98.86 7190.15 00:20:35.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.62 0.48 8152.64 7010.02 12035.22 00:20:35.908 ======================================================== 00:20:35.908 Total : 2996.86 11.71 669.73 98.86 12035.22 00:20:35.908 00:20:35.908 19:41:22 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.286 Initializing NVMe Controllers 00:20:37.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.287 Initialization complete. Launching workers. 00:20:37.287 ======================================================== 00:20:37.287 Latency(us) 00:20:37.287 Device Information : IOPS MiB/s Average min max 00:20:37.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8126.66 31.74 3937.51 620.54 7881.64 00:20:37.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2623.14 10.25 12234.06 6342.77 20144.11 00:20:37.287 ======================================================== 00:20:37.287 Total : 10749.80 41.99 5962.01 620.54 20144.11 00:20:37.287 00:20:37.287 19:41:24 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:37.287 19:41:24 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.822 Initializing NVMe Controllers 00:20:39.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.822 Controller IO queue size 128, less than required. 00:20:39.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.822 Controller IO queue size 128, less than required. 00:20:39.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:39.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:39.822 Initialization complete. Launching workers. 
00:20:39.822 ======================================================== 00:20:39.822 Latency(us) 00:20:39.822 Device Information : IOPS MiB/s Average min max 00:20:39.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1177.97 294.49 111600.84 74225.37 191479.70 00:20:39.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.98 146.00 225523.83 90805.99 339884.51 00:20:39.822 ======================================================== 00:20:39.822 Total : 1761.95 440.49 149359.65 74225.37 339884.51 00:20:39.822 00:20:39.822 19:41:26 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:40.080 No valid NVMe controllers or AIO or URING devices found 00:20:40.080 Initializing NVMe Controllers 00:20:40.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.080 Controller IO queue size 128, less than required. 00:20:40.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.080 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:40.080 Controller IO queue size 128, less than required. 00:20:40.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.080 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:40.080 WARNING: Some requested NVMe devices were skipped 00:20:40.080 19:41:26 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:42.677 Initializing NVMe Controllers 00:20:42.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.677 Controller IO queue size 128, less than required. 00:20:42.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:42.677 Controller IO queue size 128, less than required. 00:20:42.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:42.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:42.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:42.677 Initialization complete. Launching workers. 
00:20:42.677 00:20:42.677 ==================== 00:20:42.677 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:42.677 TCP transport: 00:20:42.677 polls: 8772 00:20:42.677 idle_polls: 4665 00:20:42.677 sock_completions: 4107 00:20:42.677 nvme_completions: 2529 00:20:42.677 submitted_requests: 3921 00:20:42.677 queued_requests: 1 00:20:42.677 00:20:42.677 ==================== 00:20:42.677 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:42.677 TCP transport: 00:20:42.677 polls: 11611 00:20:42.677 idle_polls: 8137 00:20:42.677 sock_completions: 3474 00:20:42.677 nvme_completions: 6552 00:20:42.677 submitted_requests: 9932 00:20:42.677 queued_requests: 1 00:20:42.677 ======================================================== 00:20:42.677 Latency(us) 00:20:42.677 Device Information : IOPS MiB/s Average min max 00:20:42.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 695.70 173.93 191074.69 123579.33 330587.58 00:20:42.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1700.78 425.19 76245.60 40166.42 126655.22 00:20:42.677 ======================================================== 00:20:42.677 Total : 2396.48 599.12 109580.76 40166.42 330587.58 00:20:42.677 00:20:42.677 19:41:29 -- host/perf.sh@66 -- # sync 00:20:42.677 19:41:29 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.935 19:41:29 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:42.935 19:41:29 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:42.935 19:41:29 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:43.193 19:41:30 -- host/perf.sh@72 -- # ls_guid=72f13aa1-5677-46fc-a7b2-134f417138ac 00:20:43.193 19:41:30 -- host/perf.sh@73 -- # get_lvs_free_mb 72f13aa1-5677-46fc-a7b2-134f417138ac 00:20:43.193 19:41:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=72f13aa1-5677-46fc-a7b2-134f417138ac 00:20:43.193 19:41:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:43.193 19:41:30 -- common/autotest_common.sh@1355 -- # local fc 00:20:43.193 19:41:30 -- common/autotest_common.sh@1356 -- # local cs 00:20:43.193 19:41:30 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:43.451 19:41:30 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:43.451 { 00:20:43.451 "base_bdev": "Nvme0n1", 00:20:43.451 "block_size": 4096, 00:20:43.451 "cluster_size": 4194304, 00:20:43.451 "free_clusters": 1278, 00:20:43.451 "name": "lvs_0", 00:20:43.451 "total_data_clusters": 1278, 00:20:43.451 "uuid": "72f13aa1-5677-46fc-a7b2-134f417138ac" 00:20:43.451 } 00:20:43.451 ]' 00:20:43.451 19:41:30 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="72f13aa1-5677-46fc-a7b2-134f417138ac") .free_clusters' 00:20:43.451 19:41:30 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:43.451 19:41:30 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="72f13aa1-5677-46fc-a7b2-134f417138ac") .cluster_size' 00:20:43.710 5112 00:20:43.710 19:41:30 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:43.710 19:41:30 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:43.710 19:41:30 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:43.710 19:41:30 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:43.710 19:41:30 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 72f13aa1-5677-46fc-a7b2-134f417138ac lbd_0 5112 00:20:43.968 19:41:30 -- host/perf.sh@80 -- # lb_guid=667a81e0-ecf6-4c6a-a388-e6514b9c66e8 00:20:43.968 19:41:30 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 667a81e0-ecf6-4c6a-a388-e6514b9c66e8 lvs_n_0 00:20:44.227 19:41:30 -- host/perf.sh@83 -- # ls_nested_guid=3a9c863b-fed0-45c1-abca-abccdbd14f3a 00:20:44.227 19:41:30 -- host/perf.sh@84 -- # get_lvs_free_mb 3a9c863b-fed0-45c1-abca-abccdbd14f3a 00:20:44.227 19:41:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3a9c863b-fed0-45c1-abca-abccdbd14f3a 00:20:44.227 19:41:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:44.227 19:41:30 -- common/autotest_common.sh@1355 -- # local fc 00:20:44.227 19:41:30 -- common/autotest_common.sh@1356 -- # local cs 00:20:44.227 19:41:30 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:44.486 19:41:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:44.486 { 00:20:44.486 "base_bdev": "Nvme0n1", 00:20:44.486 "block_size": 4096, 00:20:44.486 "cluster_size": 4194304, 00:20:44.486 "free_clusters": 0, 00:20:44.486 "name": "lvs_0", 00:20:44.486 "total_data_clusters": 1278, 00:20:44.486 "uuid": "72f13aa1-5677-46fc-a7b2-134f417138ac" 00:20:44.486 }, 00:20:44.486 { 00:20:44.486 "base_bdev": "667a81e0-ecf6-4c6a-a388-e6514b9c66e8", 00:20:44.486 "block_size": 4096, 00:20:44.486 "cluster_size": 4194304, 00:20:44.486 "free_clusters": 1276, 00:20:44.486 "name": "lvs_n_0", 00:20:44.486 "total_data_clusters": 1276, 00:20:44.486 "uuid": "3a9c863b-fed0-45c1-abca-abccdbd14f3a" 00:20:44.486 } 00:20:44.486 ]' 00:20:44.486 19:41:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3a9c863b-fed0-45c1-abca-abccdbd14f3a") .free_clusters' 00:20:44.486 19:41:31 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:44.486 19:41:31 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3a9c863b-fed0-45c1-abca-abccdbd14f3a") .cluster_size' 00:20:44.486 19:41:31 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:44.486 19:41:31 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:44.486 5104 00:20:44.486 19:41:31 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:44.486 19:41:31 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:44.486 19:41:31 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3a9c863b-fed0-45c1-abca-abccdbd14f3a lbd_nest_0 5104 00:20:44.745 19:41:31 -- host/perf.sh@88 -- # lb_nested_guid=f2d8a2ee-0028-450c-9ec7-d5b1be0acbbd 00:20:44.745 19:41:31 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.004 19:41:31 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:45.004 19:41:31 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f2d8a2ee-0028-450c-9ec7-d5b1be0acbbd 00:20:45.262 19:41:32 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.521 19:41:32 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:45.521 19:41:32 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:45.521 19:41:32 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:45.521 19:41:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:45.521 19:41:32 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.088 No valid NVMe controllers or AIO or URING devices found 00:20:46.088 Initializing NVMe Controllers 00:20:46.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.088 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:46.088 WARNING: Some requested NVMe devices were skipped 00:20:46.088 19:41:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:46.088 19:41:32 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.070 Initializing NVMe Controllers 00:20:56.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.070 Initialization complete. Launching workers. 00:20:56.070 ======================================================== 00:20:56.070 Latency(us) 00:20:56.070 Device Information : IOPS MiB/s Average min max 00:20:56.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 825.20 103.15 1211.64 361.25 8084.86 00:20:56.070 ======================================================== 00:20:56.070 Total : 825.20 103.15 1211.64 361.25 8084.86 00:20:56.070 00:20:56.070 19:41:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:56.070 19:41:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:56.070 19:41:42 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.637 No valid NVMe controllers or AIO or URING devices found 00:20:56.637 Initializing NVMe Controllers 00:20:56.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.637 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:56.637 WARNING: Some requested NVMe devices were skipped 00:20:56.637 19:41:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:56.637 19:41:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.846 Initializing NVMe Controllers 00:21:08.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:08.846 Initialization complete. Launching workers. 
00:21:08.846 ======================================================== 00:21:08.846 Latency(us) 00:21:08.846 Device Information : IOPS MiB/s Average min max 00:21:08.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1060.88 132.61 30185.00 7227.58 280955.05 00:21:08.846 ======================================================== 00:21:08.847 Total : 1060.88 132.61 30185.00 7227.58 280955.05 00:21:08.847 00:21:08.847 19:41:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:08.847 19:41:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.847 19:41:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.847 No valid NVMe controllers or AIO or URING devices found 00:21:08.847 Initializing NVMe Controllers 00:21:08.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.847 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:08.847 WARNING: Some requested NVMe devices were skipped 00:21:08.847 19:41:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.847 19:41:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.825 Initializing NVMe Controllers 00:21:18.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.825 Controller IO queue size 128, less than required. 00:21:18.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:18.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.825 Initialization complete. Launching workers. 
00:21:18.825 ======================================================== 00:21:18.825 Latency(us) 00:21:18.825 Device Information : IOPS MiB/s Average min max 00:21:18.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3949.01 493.63 32415.46 9288.51 69865.72 00:21:18.825 ======================================================== 00:21:18.825 Total : 3949.01 493.63 32415.46 9288.51 69865.72 00:21:18.825 00:21:18.825 19:42:04 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.825 19:42:04 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f2d8a2ee-0028-450c-9ec7-d5b1be0acbbd 00:21:18.825 19:42:04 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:18.825 19:42:05 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 667a81e0-ecf6-4c6a-a388-e6514b9c66e8 00:21:18.825 19:42:05 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:18.825 19:42:05 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:18.825 19:42:05 -- host/perf.sh@114 -- # nvmftestfini 00:21:18.825 19:42:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:18.825 19:42:05 -- nvmf/common.sh@116 -- # sync 00:21:18.825 19:42:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:18.825 19:42:05 -- nvmf/common.sh@119 -- # set +e 00:21:18.825 19:42:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:18.825 19:42:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:18.825 rmmod nvme_tcp 00:21:19.083 rmmod nvme_fabrics 00:21:19.083 rmmod nvme_keyring 00:21:19.083 19:42:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:19.083 19:42:05 -- nvmf/common.sh@123 -- # set -e 00:21:19.083 19:42:05 -- nvmf/common.sh@124 -- # return 0 00:21:19.083 19:42:05 -- nvmf/common.sh@477 -- # '[' -n 93618 ']' 00:21:19.083 19:42:05 -- nvmf/common.sh@478 -- # killprocess 93618 00:21:19.083 19:42:05 -- common/autotest_common.sh@936 -- # '[' -z 93618 ']' 00:21:19.083 19:42:05 -- common/autotest_common.sh@940 -- # kill -0 93618 00:21:19.083 19:42:05 -- common/autotest_common.sh@941 -- # uname 00:21:19.083 19:42:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.083 19:42:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93618 00:21:19.083 19:42:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:19.083 killing process with pid 93618 00:21:19.083 19:42:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:19.083 19:42:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93618' 00:21:19.083 19:42:05 -- common/autotest_common.sh@955 -- # kill 93618 00:21:19.083 19:42:05 -- common/autotest_common.sh@960 -- # wait 93618 00:21:20.464 19:42:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:20.464 19:42:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:20.464 19:42:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:20.464 19:42:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.464 19:42:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:20.464 19:42:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.464 19:42:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.464 19:42:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.464 19:42:07 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:20.464 ************************************ 00:21:20.464 END TEST nvmf_perf 00:21:20.464 ************************************ 00:21:20.464 00:21:20.464 real 0m51.225s 00:21:20.464 user 3m13.874s 00:21:20.464 sys 0m10.696s 00:21:20.464 19:42:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:20.464 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:21:20.464 19:42:07 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:20.464 19:42:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:20.464 19:42:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:20.464 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:21:20.464 ************************************ 00:21:20.464 START TEST nvmf_fio_host 00:21:20.464 ************************************ 00:21:20.464 19:42:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:20.728 * Looking for test storage... 00:21:20.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:20.728 19:42:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:20.728 19:42:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:20.728 19:42:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:20.728 19:42:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:20.728 19:42:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:20.728 19:42:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:20.728 19:42:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:20.728 19:42:07 -- scripts/common.sh@335 -- # IFS=.-: 00:21:20.728 19:42:07 -- scripts/common.sh@335 -- # read -ra ver1 00:21:20.728 19:42:07 -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.728 19:42:07 -- scripts/common.sh@336 -- # read -ra ver2 00:21:20.728 19:42:07 -- scripts/common.sh@337 -- # local 'op=<' 00:21:20.728 19:42:07 -- scripts/common.sh@339 -- # ver1_l=2 00:21:20.728 19:42:07 -- scripts/common.sh@340 -- # ver2_l=1 00:21:20.728 19:42:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:20.728 19:42:07 -- scripts/common.sh@343 -- # case "$op" in 00:21:20.728 19:42:07 -- scripts/common.sh@344 -- # : 1 00:21:20.728 19:42:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:20.728 19:42:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.728 19:42:07 -- scripts/common.sh@364 -- # decimal 1 00:21:20.728 19:42:07 -- scripts/common.sh@352 -- # local d=1 00:21:20.728 19:42:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.728 19:42:07 -- scripts/common.sh@354 -- # echo 1 00:21:20.728 19:42:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:20.728 19:42:07 -- scripts/common.sh@365 -- # decimal 2 00:21:20.728 19:42:07 -- scripts/common.sh@352 -- # local d=2 00:21:20.728 19:42:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.729 19:42:07 -- scripts/common.sh@354 -- # echo 2 00:21:20.729 19:42:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:20.729 19:42:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:20.729 19:42:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:20.729 19:42:07 -- scripts/common.sh@367 -- # return 0 00:21:20.729 19:42:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.729 19:42:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.729 --rc genhtml_branch_coverage=1 00:21:20.729 --rc genhtml_function_coverage=1 00:21:20.729 --rc genhtml_legend=1 00:21:20.729 --rc geninfo_all_blocks=1 00:21:20.729 --rc geninfo_unexecuted_blocks=1 00:21:20.729 00:21:20.729 ' 00:21:20.729 19:42:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.729 --rc genhtml_branch_coverage=1 00:21:20.729 --rc genhtml_function_coverage=1 00:21:20.729 --rc genhtml_legend=1 00:21:20.729 --rc geninfo_all_blocks=1 00:21:20.729 --rc geninfo_unexecuted_blocks=1 00:21:20.729 00:21:20.729 ' 00:21:20.729 19:42:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.729 --rc genhtml_branch_coverage=1 00:21:20.729 --rc genhtml_function_coverage=1 00:21:20.729 --rc genhtml_legend=1 00:21:20.729 --rc geninfo_all_blocks=1 00:21:20.729 --rc geninfo_unexecuted_blocks=1 00:21:20.729 00:21:20.729 ' 00:21:20.729 19:42:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.729 --rc genhtml_branch_coverage=1 00:21:20.729 --rc genhtml_function_coverage=1 00:21:20.729 --rc genhtml_legend=1 00:21:20.729 --rc geninfo_all_blocks=1 00:21:20.729 --rc geninfo_unexecuted_blocks=1 00:21:20.729 00:21:20.729 ' 00:21:20.729 19:42:07 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.729 19:42:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.729 19:42:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.729 19:42:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.729 19:42:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@5 -- # export PATH 00:21:20.729 19:42:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.729 19:42:07 -- nvmf/common.sh@7 -- # uname -s 00:21:20.729 19:42:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.729 19:42:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.729 19:42:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.729 19:42:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.729 19:42:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.729 19:42:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.729 19:42:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.729 19:42:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.729 19:42:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.729 19:42:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:21:20.729 19:42:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:21:20.729 19:42:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.729 19:42:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.729 19:42:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:20.729 19:42:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.729 19:42:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.729 19:42:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.729 19:42:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.729 19:42:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- paths/export.sh@5 -- # export PATH 00:21:20.729 19:42:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.729 19:42:07 -- nvmf/common.sh@46 -- # : 0 00:21:20.729 19:42:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:20.729 19:42:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:20.729 19:42:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:20.729 19:42:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.729 19:42:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.729 19:42:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:20.729 19:42:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:20.729 19:42:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:20.729 19:42:07 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.729 19:42:07 -- host/fio.sh@14 -- # nvmftestinit 00:21:20.729 19:42:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:20.729 19:42:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.729 19:42:07 -- nvmf/common.sh@436 -- # prepare_net_devs 
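nvmftestinit here follows the same virt-network path as the perf run earlier in the log; condensed, and assuming the trace continues exactly as it did there, the control flow is:

  trap nvmftestfini SIGINT SIGTERM EXIT     # tear the target and veth topology down on exit
  prepare_net_devs                          # NET_TYPE=virt, so is_hw stays no
  nvmf_veth_init                            # rebuilds the 10.0.0.x veth/netns topology (trace continues below)
  NVMF_TRANSPORT_OPTS='-t tcp -o'           # transport options this suite uses for tcp
  modprobe nvme-tcp                         # host-side transport module, as in the perf run above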
00:21:20.729 19:42:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:20.729 19:42:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:20.729 19:42:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.729 19:42:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.729 19:42:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.729 19:42:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:20.729 19:42:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:20.729 19:42:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.729 19:42:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.729 19:42:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:20.729 19:42:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:20.729 19:42:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:20.729 19:42:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:20.729 19:42:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:20.729 19:42:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.729 19:42:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:20.729 19:42:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:20.729 19:42:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:20.729 19:42:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:20.729 19:42:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:20.729 19:42:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:20.729 Cannot find device "nvmf_tgt_br" 00:21:20.729 19:42:07 -- nvmf/common.sh@154 -- # true 00:21:20.729 19:42:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:20.729 Cannot find device "nvmf_tgt_br2" 00:21:20.729 19:42:07 -- nvmf/common.sh@155 -- # true 00:21:20.729 19:42:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:20.729 19:42:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:20.729 Cannot find device "nvmf_tgt_br" 00:21:20.729 19:42:07 -- nvmf/common.sh@157 -- # true 00:21:20.729 19:42:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:20.729 Cannot find device "nvmf_tgt_br2" 00:21:20.729 19:42:07 -- nvmf/common.sh@158 -- # true 00:21:20.730 19:42:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:20.989 19:42:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:20.989 19:42:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:20.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.989 19:42:07 -- nvmf/common.sh@161 -- # true 00:21:20.989 19:42:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:20.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.989 19:42:07 -- nvmf/common.sh@162 -- # true 00:21:20.989 19:42:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:20.989 19:42:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:20.989 19:42:07 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:20.989 19:42:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:20.989 19:42:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:20.989 19:42:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:20.989 19:42:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:20.989 19:42:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:20.989 19:42:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:20.989 19:42:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:20.989 19:42:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:20.989 19:42:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:20.989 19:42:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:20.989 19:42:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.989 19:42:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.989 19:42:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.989 19:42:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:20.989 19:42:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:20.989 19:42:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:20.989 19:42:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.989 19:42:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.989 19:42:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.989 19:42:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.989 19:42:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:20.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:20.989 00:21:20.989 --- 10.0.0.2 ping statistics --- 00:21:20.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.989 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:20.989 19:42:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:20.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:20.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:20.989 00:21:20.989 --- 10.0.0.3 ping statistics --- 00:21:20.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.989 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:20.989 19:42:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:20.989 00:21:20.989 --- 10.0.0.1 ping statistics --- 00:21:20.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.989 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:21.248 19:42:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.248 19:42:07 -- nvmf/common.sh@421 -- # return 0 00:21:21.248 19:42:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:21.248 19:42:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.248 19:42:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:21.248 19:42:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:21.248 19:42:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.248 19:42:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:21.248 19:42:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:21.248 19:42:07 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:21.248 19:42:07 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:21.248 19:42:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.248 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:21:21.248 19:42:07 -- host/fio.sh@24 -- # nvmfpid=94598 00:21:21.248 19:42:07 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.248 19:42:07 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.248 19:42:07 -- host/fio.sh@28 -- # waitforlisten 94598 00:21:21.248 19:42:07 -- common/autotest_common.sh@829 -- # '[' -z 94598 ']' 00:21:21.248 19:42:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.248 19:42:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.248 19:42:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.248 19:42:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.248 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:21:21.248 [2024-12-15 19:42:07.948979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:21.248 [2024-12-15 19:42:07.949599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.248 [2024-12-15 19:42:08.086207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.506 [2024-12-15 19:42:08.172472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:21.506 [2024-12-15 19:42:08.172669] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.506 [2024-12-15 19:42:08.172688] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.506 [2024-12-15 19:42:08.172700] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
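The nvmf_veth_init sequence traced above builds the veth/bridge topology that every TCP test in this run talks over: the initiator side stays in the root namespace on 10.0.0.1, the target side moves into nvmf_tgt_ns_spdk on 10.0.0.2, and the two peer ends are bridged. A condensed sketch of those same commands (link-up steps, the second target interface on 10.0.0.3, and error handling omitted; names and addresses are copied from the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # the connectivity check seen above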
00:21:21.506 [2024-12-15 19:42:08.172847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.506 [2024-12-15 19:42:08.173254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.506 [2024-12-15 19:42:08.173727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.506 [2024-12-15 19:42:08.173761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.443 19:42:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.443 19:42:08 -- common/autotest_common.sh@862 -- # return 0 00:21:22.443 19:42:08 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:22.443 [2024-12-15 19:42:09.247076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.443 19:42:09 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:22.443 19:42:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.443 19:42:09 -- common/autotest_common.sh@10 -- # set +x 00:21:22.443 19:42:09 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:23.011 Malloc1 00:21:23.011 19:42:09 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.270 19:42:09 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.529 19:42:10 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.788 [2024-12-15 19:42:10.533636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.788 19:42:10 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.047 19:42:10 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:24.047 19:42:10 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.048 19:42:10 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.048 19:42:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:24.048 19:42:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.048 19:42:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:24.048 19:42:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.048 19:42:10 -- common/autotest_common.sh@1330 -- # shift 00:21:24.048 19:42:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:24.048 19:42:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.048 19:42:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.048 19:42:10 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:24.048 19:42:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.048 19:42:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.048 19:42:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:24.048 19:42:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.307 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:24.307 fio-3.35 00:21:24.307 Starting 1 thread 00:21:26.840 00:21:26.840 test: (groupid=0, jobs=1): err= 0: pid=94725: Sun Dec 15 19:42:13 2024 00:21:26.840 read: IOPS=10.9k, BW=42.5MiB/s (44.6MB/s)(85.2MiB/2005msec) 00:21:26.840 slat (nsec): min=1666, max=358807, avg=2502.08, stdev=3592.98 00:21:26.840 clat (usec): min=3242, max=10934, avg=6259.01, stdev=550.59 00:21:26.840 lat (usec): min=3285, max=10940, avg=6261.51, stdev=550.59 00:21:26.840 clat percentiles (usec): 00:21:26.840 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:21:26.840 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:21:26.840 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7177], 00:21:26.840 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[10028], 99.95th=[10552], 00:21:26.840 | 99.99th=[10814] 00:21:26.840 bw ( KiB/s): min=42272, max=44480, per=99.91%, avg=43472.00, stdev=971.95, samples=4 00:21:26.840 iops : min=10568, max=11120, avg=10868.00, stdev=242.99, samples=4 00:21:26.840 write: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(85.0MiB/2005msec); 0 zone resets 00:21:26.840 slat (nsec): min=1730, max=1315.1k, avg=2627.47, stdev=9345.20 00:21:26.840 clat (usec): min=2496, max=9725, avg=5476.70, stdev=449.07 00:21:26.840 lat (usec): min=2510, max=9727, avg=5479.33, stdev=449.17 00:21:26.840 clat percentiles (usec): 00:21:26.840 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:21:26.840 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:21:26.841 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6194], 00:21:26.841 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 8455], 99.95th=[ 9241], 00:21:26.841 | 99.99th=[ 9634] 00:21:26.841 bw ( KiB/s): min=42704, max=43712, per=100.00%, avg=43414.00, stdev=477.21, samples=4 00:21:26.841 iops : min=10676, max=10928, avg=10853.50, stdev=119.30, samples=4 00:21:26.841 lat (msec) : 4=0.07%, 10=99.88%, 20=0.05% 00:21:26.841 cpu : usr=65.29%, sys=24.29%, ctx=17, majf=0, minf=5 00:21:26.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:26.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:26.841 issued rwts: total=21811,21759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:26.841 00:21:26.841 Run status group 0 (all jobs): 00:21:26.841 READ: bw=42.5MiB/s (44.6MB/s), 42.5MiB/s-42.5MiB/s (44.6MB/s-44.6MB/s), io=85.2MiB (89.3MB), 
run=2005-2005msec 00:21:26.841 WRITE: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=85.0MiB (89.1MB), run=2005-2005msec 00:21:26.841 19:42:13 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:26.841 19:42:13 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:26.841 19:42:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:26.841 19:42:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.841 19:42:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:26.841 19:42:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.841 19:42:13 -- common/autotest_common.sh@1330 -- # shift 00:21:26.841 19:42:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:26.841 19:42:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.841 19:42:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.841 19:42:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.841 19:42:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.841 19:42:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.841 19:42:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:26.841 19:42:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:26.841 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:26.841 fio-3.35 00:21:26.841 Starting 1 thread 00:21:29.375 00:21:29.375 test: (groupid=0, jobs=1): err= 0: pid=94768: Sun Dec 15 19:42:15 2024 00:21:29.375 read: IOPS=8049, BW=126MiB/s (132MB/s)(252MiB/2005msec) 00:21:29.375 slat (usec): min=2, max=135, avg= 3.86, stdev= 3.18 00:21:29.375 clat (usec): min=2338, max=17097, avg=9577.21, stdev=2228.15 00:21:29.375 lat (usec): min=2341, max=17114, avg=9581.08, stdev=2228.24 00:21:29.375 clat percentiles (usec): 00:21:29.375 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7570], 00:21:29.375 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10159], 00:21:29.375 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12387], 95.00th=[13173], 00:21:29.375 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16581], 99.95th=[16909], 00:21:29.375 | 99.99th=[16909] 00:21:29.375 bw ( KiB/s): min=58400, max=74912, per=51.68%, avg=66560.00, stdev=8775.51, samples=4 00:21:29.375 iops : 
min= 3650, max= 4682, avg=4160.00, stdev=548.47, samples=4 00:21:29.375 write: IOPS=4692, BW=73.3MiB/s (76.9MB/s)(135MiB/1848msec); 0 zone resets 00:21:29.375 slat (usec): min=27, max=217, avg=39.28, stdev=11.21 00:21:29.375 clat (usec): min=3549, max=16853, avg=11015.32, stdev=1678.31 00:21:29.375 lat (usec): min=3583, max=16904, avg=11054.60, stdev=1679.68 00:21:29.375 clat percentiles (usec): 00:21:29.375 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9634], 00:21:29.375 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:21:29.375 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13173], 95.00th=[13960], 00:21:29.375 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16581], 00:21:29.375 | 99.99th=[16909] 00:21:29.375 bw ( KiB/s): min=59264, max=78848, per=91.73%, avg=68864.00, stdev=9699.33, samples=4 00:21:29.375 iops : min= 3704, max= 4928, avg=4304.00, stdev=606.21, samples=4 00:21:29.375 lat (msec) : 4=0.23%, 10=47.16%, 20=52.61% 00:21:29.375 cpu : usr=65.07%, sys=21.76%, ctx=4, majf=0, minf=1 00:21:29.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:29.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:29.375 issued rwts: total=16140,8671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:29.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:29.375 00:21:29.375 Run status group 0 (all jobs): 00:21:29.375 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2005-2005msec 00:21:29.375 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=135MiB (142MB), run=1848-1848msec 00:21:29.375 19:42:15 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.375 19:42:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:29.375 19:42:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:29.375 19:42:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:29.375 19:42:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:29.375 19:42:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:29.375 19:42:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:29.376 19:42:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:29.376 19:42:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:29.376 19:42:16 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:29.376 19:42:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:29.376 19:42:16 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:29.634 Nvme0n1 00:21:29.634 19:42:16 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:29.893 19:42:16 -- host/fio.sh@53 -- # ls_guid=4ade66b3-3393-47c2-9989-60e8305da7c2 00:21:29.893 19:42:16 -- host/fio.sh@54 -- # get_lvs_free_mb 4ade66b3-3393-47c2-9989-60e8305da7c2 00:21:29.893 19:42:16 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4ade66b3-3393-47c2-9989-60e8305da7c2 00:21:29.893 19:42:16 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:29.893 19:42:16 -- common/autotest_common.sh@1355 -- # local fc 00:21:29.893 19:42:16 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:29.893 19:42:16 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:30.152 19:42:17 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:30.152 { 00:21:30.152 "base_bdev": "Nvme0n1", 00:21:30.152 "block_size": 4096, 00:21:30.152 "cluster_size": 1073741824, 00:21:30.152 "free_clusters": 4, 00:21:30.152 "name": "lvs_0", 00:21:30.152 "total_data_clusters": 4, 00:21:30.152 "uuid": "4ade66b3-3393-47c2-9989-60e8305da7c2" 00:21:30.152 } 00:21:30.152 ]' 00:21:30.152 19:42:17 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4ade66b3-3393-47c2-9989-60e8305da7c2") .free_clusters' 00:21:30.411 19:42:17 -- common/autotest_common.sh@1358 -- # fc=4 00:21:30.411 19:42:17 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4ade66b3-3393-47c2-9989-60e8305da7c2") .cluster_size' 00:21:30.411 4096 00:21:30.411 19:42:17 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:30.411 19:42:17 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:30.411 19:42:17 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:30.411 19:42:17 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:30.670 427419d7-2d11-4d83-9063-df35a584bfa1 00:21:30.670 19:42:17 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:30.929 19:42:17 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:31.188 19:42:17 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:31.447 19:42:18 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.447 19:42:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.447 19:42:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:31.447 19:42:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.447 19:42:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:31.447 19:42:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.447 19:42:18 -- common/autotest_common.sh@1330 -- # shift 00:21:31.447 19:42:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:31.447 19:42:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:31.447 19:42:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:31.447 19:42:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.447 19:42:18 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:31.447 19:42:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:31.447 19:42:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:31.447 19:42:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:31.447 19:42:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.447 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:31.447 fio-3.35 00:21:31.447 Starting 1 thread 00:21:33.982 00:21:33.982 test: (groupid=0, jobs=1): err= 0: pid=94926: Sun Dec 15 19:42:20 2024 00:21:33.982 read: IOPS=6509, BW=25.4MiB/s (26.7MB/s)(51.0MiB/2007msec) 00:21:33.982 slat (nsec): min=1675, max=361375, avg=2879.57, stdev=4914.80 00:21:33.982 clat (usec): min=4177, max=19317, avg=10505.36, stdev=1032.12 00:21:33.982 lat (usec): min=4186, max=19320, avg=10508.24, stdev=1031.94 00:21:33.982 clat percentiles (usec): 00:21:33.982 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:21:33.982 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:21:33.982 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:21:33.982 | 99.00th=[12780], 99.50th=[13173], 99.90th=[18220], 99.95th=[18482], 00:21:33.982 | 99.99th=[19268] 00:21:33.982 bw ( KiB/s): min=24792, max=26880, per=99.76%, avg=25974.00, stdev=895.18, samples=4 00:21:33.982 iops : min= 6198, max= 6720, avg=6493.50, stdev=223.79, samples=4 00:21:33.982 write: IOPS=6514, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2007msec); 0 zone resets 00:21:33.982 slat (nsec): min=1778, max=262366, avg=3018.12, stdev=3691.84 00:21:33.982 clat (usec): min=2442, max=15574, avg=9065.27, stdev=851.22 00:21:33.982 lat (usec): min=2455, max=15576, avg=9068.29, stdev=851.10 00:21:33.982 clat percentiles (usec): 00:21:33.982 | 1.00th=[ 7111], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8455], 00:21:33.982 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:21:33.982 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:21:33.982 | 99.00th=[10945], 99.50th=[11207], 99.90th=[13698], 99.95th=[15270], 00:21:33.982 | 99.99th=[15533] 00:21:33.982 bw ( KiB/s): min=25904, max=26304, per=99.97%, avg=26050.00, stdev=174.83, samples=4 00:21:33.982 iops : min= 6476, max= 6576, avg=6512.50, stdev=43.71, samples=4 00:21:33.982 lat (msec) : 4=0.03%, 10=59.41%, 20=40.56% 00:21:33.982 cpu : usr=67.95%, sys=23.68%, ctx=6, majf=0, minf=5 00:21:33.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:33.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.982 issued rwts: total=13064,13075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.982 00:21:33.982 Run status group 0 (all jobs): 00:21:33.982 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.0MiB (53.5MB), run=2007-2007msec 00:21:33.982 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2007-2007msec 00:21:33.982 19:42:20 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:34.242 19:42:20 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:34.501 19:42:21 -- host/fio.sh@64 -- # ls_nested_guid=1986d88b-b7ac-4c47-adf3-04c9f3de3de7 00:21:34.501 19:42:21 -- host/fio.sh@65 -- # get_lvs_free_mb 1986d88b-b7ac-4c47-adf3-04c9f3de3de7 00:21:34.501 19:42:21 -- common/autotest_common.sh@1353 -- # local lvs_uuid=1986d88b-b7ac-4c47-adf3-04c9f3de3de7 00:21:34.501 19:42:21 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:34.501 19:42:21 -- common/autotest_common.sh@1355 -- # local fc 00:21:34.501 19:42:21 -- common/autotest_common.sh@1356 -- # local cs 00:21:34.501 19:42:21 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:34.760 19:42:21 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:34.760 { 00:21:34.760 "base_bdev": "Nvme0n1", 00:21:34.760 "block_size": 4096, 00:21:34.760 "cluster_size": 1073741824, 00:21:34.760 "free_clusters": 0, 00:21:34.760 "name": "lvs_0", 00:21:34.760 "total_data_clusters": 4, 00:21:34.760 "uuid": "4ade66b3-3393-47c2-9989-60e8305da7c2" 00:21:34.760 }, 00:21:34.760 { 00:21:34.761 "base_bdev": "427419d7-2d11-4d83-9063-df35a584bfa1", 00:21:34.761 "block_size": 4096, 00:21:34.761 "cluster_size": 4194304, 00:21:34.761 "free_clusters": 1022, 00:21:34.761 "name": "lvs_n_0", 00:21:34.761 "total_data_clusters": 1022, 00:21:34.761 "uuid": "1986d88b-b7ac-4c47-adf3-04c9f3de3de7" 00:21:34.761 } 00:21:34.761 ]' 00:21:34.761 19:42:21 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="1986d88b-b7ac-4c47-adf3-04c9f3de3de7") .free_clusters' 00:21:34.761 19:42:21 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:34.761 19:42:21 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="1986d88b-b7ac-4c47-adf3-04c9f3de3de7") .cluster_size' 00:21:34.761 19:42:21 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:34.761 19:42:21 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:34.761 4088 00:21:34.761 19:42:21 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:34.761 19:42:21 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:35.019 a39c3c04-c7e4-4904-872c-66967009ba7e 00:21:35.019 19:42:21 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:35.278 19:42:22 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:35.536 19:42:22 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:35.794 19:42:22 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:35.794 19:42:22 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:35.794 19:42:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:35.794 19:42:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:35.794 
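For context, the get_lvs_free_mb helper traced above sizes the nested lvol from the parent store's free clusters before bdev_lvol_create is called. A minimal sketch of that lookup with the UUID and values from this run (the explicit MiB arithmetic is an assumed intermediate step; the trace only shows its result, 4088):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=1986d88b-b7ac-4c47-adf3-04c9f3de3de7
fc=$($rpc_py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")  # 1022
cs=$($rpc_py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")   # 4194304
free_mb=$(( fc * cs / 1024 / 1024 ))                                                       # 4088 MiB
$rpc_py bdev_lvol_create -l lvs_n_0 lbd_nest_0 "$free_mb"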
19:42:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:35.794 19:42:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:35.794 19:42:22 -- common/autotest_common.sh@1330 -- # shift 00:21:35.794 19:42:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:35.794 19:42:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:35.794 19:42:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:35.794 19:42:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:35.794 19:42:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:35.794 19:42:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:35.794 19:42:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:35.794 19:42:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:36.052 fio-3.35 00:21:36.052 Starting 1 thread 00:21:38.624 00:21:38.624 test: (groupid=0, jobs=1): err= 0: pid=95047: Sun Dec 15 19:42:25 2024 00:21:38.624 read: IOPS=5912, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec) 00:21:38.624 slat (nsec): min=1730, max=258103, avg=2852.77, stdev=4135.76 00:21:38.624 clat (usec): min=4399, max=20765, avg=11579.80, stdev=1139.89 00:21:38.624 lat (usec): min=4406, max=20768, avg=11582.65, stdev=1139.76 00:21:38.624 clat percentiles (usec): 00:21:38.624 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:21:38.624 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:21:38.624 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13042], 95.00th=[13435], 00:21:38.624 | 99.00th=[14091], 99.50th=[14484], 99.90th=[19530], 99.95th=[20579], 00:21:38.624 | 99.99th=[20579] 00:21:38.624 bw ( KiB/s): min=22672, max=24080, per=99.85%, avg=23614.00, stdev=660.23, samples=4 00:21:38.624 iops : min= 5668, max= 6020, avg=5903.50, stdev=165.06, samples=4 00:21:38.624 write: IOPS=5911, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec); 0 zone resets 00:21:38.624 slat (nsec): min=1777, max=176207, avg=3001.39, stdev=3283.28 00:21:38.624 clat (usec): min=2043, max=17646, avg=10017.56, stdev=937.60 00:21:38.624 lat (usec): min=2053, max=17649, avg=10020.56, stdev=937.54 00:21:38.624 clat percentiles (usec): 00:21:38.624 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:21:38.624 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:21:38.624 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:21:38.624 | 99.00th=[11994], 99.50th=[12387], 99.90th=[15139], 99.95th=[16712], 00:21:38.624 | 99.99th=[17433] 
00:21:38.624 bw ( KiB/s): min=23424, max=23808, per=99.88%, avg=23618.00, stdev=164.26, samples=4 00:21:38.624 iops : min= 5856, max= 5952, avg=5904.50, stdev=41.06, samples=4 00:21:38.624 lat (msec) : 4=0.04%, 10=28.03%, 20=71.89%, 50=0.04% 00:21:38.624 cpu : usr=67.02%, sys=25.36%, ctx=24, majf=0, minf=5 00:21:38.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:38.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.624 issued rwts: total=11872,11871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.624 00:21:38.624 Run status group 0 (all jobs): 00:21:38.624 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:21:38.624 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:21:38.624 19:42:25 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:38.624 19:42:25 -- host/fio.sh@74 -- # sync 00:21:38.624 19:42:25 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:38.882 19:42:25 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:39.141 19:42:25 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:39.400 19:42:26 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:39.658 19:42:26 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:40.225 19:42:26 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:40.225 19:42:26 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:40.225 19:42:26 -- host/fio.sh@86 -- # nvmftestfini 00:21:40.225 19:42:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:40.225 19:42:26 -- nvmf/common.sh@116 -- # sync 00:21:40.225 19:42:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:40.225 19:42:26 -- nvmf/common.sh@119 -- # set +e 00:21:40.225 19:42:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:40.225 19:42:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:40.225 rmmod nvme_tcp 00:21:40.225 rmmod nvme_fabrics 00:21:40.225 rmmod nvme_keyring 00:21:40.225 19:42:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:40.225 19:42:26 -- nvmf/common.sh@123 -- # set -e 00:21:40.225 19:42:26 -- nvmf/common.sh@124 -- # return 0 00:21:40.225 19:42:26 -- nvmf/common.sh@477 -- # '[' -n 94598 ']' 00:21:40.225 19:42:26 -- nvmf/common.sh@478 -- # killprocess 94598 00:21:40.225 19:42:26 -- common/autotest_common.sh@936 -- # '[' -z 94598 ']' 00:21:40.225 19:42:26 -- common/autotest_common.sh@940 -- # kill -0 94598 00:21:40.225 19:42:26 -- common/autotest_common.sh@941 -- # uname 00:21:40.225 19:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.225 19:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94598 00:21:40.225 killing process with pid 94598 00:21:40.225 19:42:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.225 19:42:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.225 19:42:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94598' 00:21:40.225 
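The cleanup traced above unwinds the storage stack in strict reverse order: nested lvol, nested lvstore, base lvol, base lvstore, and only then the NVMe controller they were carved from. Condensed from the rpc.py calls in the trace (the intermediate sync and the trap handling are omitted):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
$rpc_py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0   # nested lvol first
$rpc_py bdev_lvol_delete_lvstore -l lvs_n_0
$rpc_py bdev_lvol_delete lvs_0/lbd_0
$rpc_py bdev_lvol_delete_lvstore -l lvs_0
$rpc_py bdev_nvme_detach_controller Nvme0            # release the NVMe device last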
19:42:26 -- common/autotest_common.sh@955 -- # kill 94598 00:21:40.225 19:42:26 -- common/autotest_common.sh@960 -- # wait 94598 00:21:40.484 19:42:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:40.484 19:42:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:40.484 19:42:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:40.484 19:42:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.484 19:42:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:40.484 19:42:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.484 19:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.484 19:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.484 19:42:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:40.484 00:21:40.484 real 0m19.985s 00:21:40.484 user 1m27.196s 00:21:40.484 sys 0m4.924s 00:21:40.484 19:42:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:40.484 ************************************ 00:21:40.484 END TEST nvmf_fio_host 00:21:40.484 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:21:40.484 ************************************ 00:21:40.484 19:42:27 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:40.484 19:42:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:40.484 19:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.484 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:21:40.484 ************************************ 00:21:40.484 START TEST nvmf_failover 00:21:40.484 ************************************ 00:21:40.484 19:42:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:40.742 * Looking for test storage... 00:21:40.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:40.742 19:42:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:40.743 19:42:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:40.743 19:42:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:40.743 19:42:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:40.743 19:42:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:40.743 19:42:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:40.743 19:42:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:40.743 19:42:27 -- scripts/common.sh@335 -- # IFS=.-: 00:21:40.743 19:42:27 -- scripts/common.sh@335 -- # read -ra ver1 00:21:40.743 19:42:27 -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.743 19:42:27 -- scripts/common.sh@336 -- # read -ra ver2 00:21:40.743 19:42:27 -- scripts/common.sh@337 -- # local 'op=<' 00:21:40.743 19:42:27 -- scripts/common.sh@339 -- # ver1_l=2 00:21:40.743 19:42:27 -- scripts/common.sh@340 -- # ver2_l=1 00:21:40.743 19:42:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:40.743 19:42:27 -- scripts/common.sh@343 -- # case "$op" in 00:21:40.743 19:42:27 -- scripts/common.sh@344 -- # : 1 00:21:40.743 19:42:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:40.743 19:42:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.743 19:42:27 -- scripts/common.sh@364 -- # decimal 1 00:21:40.743 19:42:27 -- scripts/common.sh@352 -- # local d=1 00:21:40.743 19:42:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.743 19:42:27 -- scripts/common.sh@354 -- # echo 1 00:21:40.743 19:42:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:40.743 19:42:27 -- scripts/common.sh@365 -- # decimal 2 00:21:40.743 19:42:27 -- scripts/common.sh@352 -- # local d=2 00:21:40.743 19:42:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.743 19:42:27 -- scripts/common.sh@354 -- # echo 2 00:21:40.743 19:42:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:40.743 19:42:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:40.743 19:42:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:40.743 19:42:27 -- scripts/common.sh@367 -- # return 0 00:21:40.743 19:42:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.743 19:42:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.743 --rc genhtml_branch_coverage=1 00:21:40.743 --rc genhtml_function_coverage=1 00:21:40.743 --rc genhtml_legend=1 00:21:40.743 --rc geninfo_all_blocks=1 00:21:40.743 --rc geninfo_unexecuted_blocks=1 00:21:40.743 00:21:40.743 ' 00:21:40.743 19:42:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.743 --rc genhtml_branch_coverage=1 00:21:40.743 --rc genhtml_function_coverage=1 00:21:40.743 --rc genhtml_legend=1 00:21:40.743 --rc geninfo_all_blocks=1 00:21:40.743 --rc geninfo_unexecuted_blocks=1 00:21:40.743 00:21:40.743 ' 00:21:40.743 19:42:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.743 --rc genhtml_branch_coverage=1 00:21:40.743 --rc genhtml_function_coverage=1 00:21:40.743 --rc genhtml_legend=1 00:21:40.743 --rc geninfo_all_blocks=1 00:21:40.743 --rc geninfo_unexecuted_blocks=1 00:21:40.743 00:21:40.743 ' 00:21:40.743 19:42:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.743 --rc genhtml_branch_coverage=1 00:21:40.743 --rc genhtml_function_coverage=1 00:21:40.743 --rc genhtml_legend=1 00:21:40.743 --rc geninfo_all_blocks=1 00:21:40.743 --rc geninfo_unexecuted_blocks=1 00:21:40.743 00:21:40.743 ' 00:21:40.743 19:42:27 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.743 19:42:27 -- nvmf/common.sh@7 -- # uname -s 00:21:40.743 19:42:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.743 19:42:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.743 19:42:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.743 19:42:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.743 19:42:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.743 19:42:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.743 19:42:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.743 19:42:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.743 19:42:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.743 19:42:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:21:40.743 
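The lt/cmp_versions trace above is how the harness decides whether the installed lcov predates 2.x before exporting LCOV_OPTS. A rough standalone equivalent of that dotted-version check is sketched below; the loop body is a reconstruction from the trace and assumes purely numeric components, as in this run:

# lt A B: succeed when version A is strictly older than version B (e.g. lt 1.15 2)
lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2.x'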
19:42:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:21:40.743 19:42:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.743 19:42:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.743 19:42:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.743 19:42:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.743 19:42:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.743 19:42:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.743 19:42:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.743 19:42:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.743 19:42:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.743 19:42:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.743 19:42:27 -- paths/export.sh@5 -- # export PATH 00:21:40.743 19:42:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.743 19:42:27 -- nvmf/common.sh@46 -- # : 0 00:21:40.743 19:42:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.743 19:42:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.743 19:42:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.743 19:42:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.743 19:42:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.743 19:42:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:40.743 19:42:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.743 19:42:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.743 19:42:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.743 19:42:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.743 19:42:27 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.743 19:42:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.743 19:42:27 -- host/failover.sh@18 -- # nvmftestinit 00:21:40.743 19:42:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.743 19:42:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.743 19:42:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:40.743 19:42:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.743 19:42:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.743 19:42:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.743 19:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.743 19:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.743 19:42:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:40.743 19:42:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:40.743 19:42:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.743 19:42:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.743 19:42:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:40.743 19:42:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:40.743 19:42:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.743 19:42:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.743 19:42:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.743 19:42:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.743 19:42:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.743 19:42:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.743 19:42:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.743 19:42:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.743 19:42:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:40.743 19:42:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:40.743 Cannot find device "nvmf_tgt_br" 00:21:40.743 19:42:27 -- nvmf/common.sh@154 -- # true 00:21:40.743 19:42:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.743 Cannot find device "nvmf_tgt_br2" 00:21:40.743 19:42:27 -- nvmf/common.sh@155 -- # true 00:21:40.743 19:42:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:40.743 19:42:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:40.743 Cannot find device "nvmf_tgt_br" 00:21:40.743 19:42:27 -- nvmf/common.sh@157 -- # true 00:21:40.743 19:42:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:41.002 Cannot find device "nvmf_tgt_br2" 00:21:41.002 19:42:27 -- nvmf/common.sh@158 -- # true 00:21:41.002 19:42:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:41.002 19:42:27 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:41.002 19:42:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.002 19:42:27 -- nvmf/common.sh@161 -- # true 00:21:41.002 19:42:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.002 19:42:27 -- nvmf/common.sh@162 -- # true 00:21:41.002 19:42:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.002 19:42:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.002 19:42:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.002 19:42:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.002 19:42:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.002 19:42:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.002 19:42:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.002 19:42:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:41.002 19:42:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:41.002 19:42:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:41.002 19:42:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:41.002 19:42:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:41.002 19:42:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:41.002 19:42:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.002 19:42:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.002 19:42:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.002 19:42:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:41.002 19:42:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:41.002 19:42:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:41.002 19:42:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.002 19:42:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.002 19:42:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.002 19:42:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.002 19:42:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:41.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:41.002 00:21:41.002 --- 10.0.0.2 ping statistics --- 00:21:41.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.002 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:41.002 19:42:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:41.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:41.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:21:41.261 00:21:41.261 --- 10.0.0.3 ping statistics --- 00:21:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.261 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:41.261 19:42:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:41.261 00:21:41.261 --- 10.0.0.1 ping statistics --- 00:21:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.261 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:41.261 19:42:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.261 19:42:27 -- nvmf/common.sh@421 -- # return 0 00:21:41.261 19:42:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.261 19:42:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.261 19:42:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.261 19:42:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.261 19:42:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.261 19:42:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.261 19:42:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.261 19:42:27 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:41.261 19:42:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.261 19:42:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.261 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:21:41.261 19:42:27 -- nvmf/common.sh@469 -- # nvmfpid=95325 00:21:41.261 19:42:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.261 19:42:27 -- nvmf/common.sh@470 -- # waitforlisten 95325 00:21:41.261 19:42:27 -- common/autotest_common.sh@829 -- # '[' -z 95325 ']' 00:21:41.261 19:42:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.261 19:42:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.261 19:42:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.261 19:42:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.261 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:21:41.261 [2024-12-15 19:42:27.991714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:41.261 [2024-12-15 19:42:27.991795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.261 [2024-12-15 19:42:28.130491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.520 [2024-12-15 19:42:28.209418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:41.520 [2024-12-15 19:42:28.209567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.520 [2024-12-15 19:42:28.209580] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
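The ip/ping trace above is nvmf_veth_init (nvmf/common.sh@165-@206) building the test network that the nvmf_tgt process just launched inside the nvmf_tgt_ns_spdk namespace will listen on. For reference, a condensed sketch of that sequence follows; it only restates commands already traced in this log and assumes the same interface names and 10.0.0.0/24 addressing used by this run.

    # initiator side (default netns):  nvmf_init_if  10.0.0.1/24
    # target side (nvmf_tgt_ns_spdk):  nvmf_tgt_if   10.0.0.2/24
    #                                  nvmf_tgt_if2  10.0.0.3/24
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties all of the host-side veth peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> first target address
    ping -c 1 10.0.0.3                                  # host -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target netns -> initiator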
00:21:41.520 [2024-12-15 19:42:28.209588] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.520 [2024-12-15 19:42:28.209761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.520 [2024-12-15 19:42:28.209923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.520 [2024-12-15 19:42:28.209931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.454 19:42:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.454 19:42:29 -- common/autotest_common.sh@862 -- # return 0 00:21:42.454 19:42:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:42.454 19:42:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.454 19:42:29 -- common/autotest_common.sh@10 -- # set +x 00:21:42.454 19:42:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.454 19:42:29 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:42.454 [2024-12-15 19:42:29.336636] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.712 19:42:29 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:42.712 Malloc0 00:21:42.970 19:42:29 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.970 19:42:29 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.229 19:42:30 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.487 [2024-12-15 19:42:30.250535] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.487 19:42:30 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.745 [2024-12-15 19:42:30.466723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.745 19:42:30 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:44.003 [2024-12-15 19:42:30.691087] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:44.003 19:42:30 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:44.003 19:42:30 -- host/failover.sh@31 -- # bdevperf_pid=95441 00:21:44.003 19:42:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.003 19:42:30 -- host/failover.sh@34 -- # waitforlisten 95441 /var/tmp/bdevperf.sock 00:21:44.003 19:42:30 -- common/autotest_common.sh@829 -- # '[' -z 95441 ']' 00:21:44.003 19:42:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.003 19:42:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
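At this point failover.sh has finished provisioning the target and has launched the bdevperf host process that will drive the workload. Condensed, the RPC calls traced above amount to the sketch below; it mirrors the rpc.py invocations recorded in this run (same repo paths, same NQN) and is a reference summary rather than an independent procedure.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # target-side configuration over the default /var/tmp/spdk.sock RPC socket
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do                 # three listeners = three candidate paths
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
    done
    # host-side bdevperf: its own RPC socket, QD 128, 4 KiB verify workload for 15 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &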
00:21:44.003 19:42:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.003 19:42:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.003 19:42:30 -- common/autotest_common.sh@10 -- # set +x 00:21:44.939 19:42:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.939 19:42:31 -- common/autotest_common.sh@862 -- # return 0 00:21:44.939 19:42:31 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.505 NVMe0n1 00:21:45.505 19:42:32 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.763 00:21:45.763 19:42:32 -- host/failover.sh@39 -- # run_test_pid=95491 00:21:45.763 19:42:32 -- host/failover.sh@41 -- # sleep 1 00:21:45.763 19:42:32 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.698 19:42:33 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.956 [2024-12-15 19:42:33.828744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828943] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.828993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 [2024-12-15 19:42:33.829047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacab0 is same with the state(5) to be set 00:21:46.956 19:42:33 -- host/failover.sh@45 -- # sleep 3 00:21:50.240 19:42:36 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.498 00:21:50.498 19:42:37 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:50.757 [2024-12-15 19:42:37.478884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 
19:42:37.478981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.478998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to 
be set 00:21:50.757 [2024-12-15 19:42:37.479167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 [2024-12-15 19:42:37.479224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad920 is same with the state(5) to be set 00:21:50.757 19:42:37 -- host/failover.sh@50 -- # sleep 3 00:21:54.040 19:42:40 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.040 [2024-12-15 19:42:40.762892] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.040 19:42:40 -- host/failover.sh@55 -- # sleep 1 00:21:54.974 19:42:41 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.232 [2024-12-15 19:42:42.060906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.060965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.060976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.060985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.232 [2024-12-15 19:42:42.061089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 
00:21:55.232 [2024-12-15 19:42:42.061096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 [2024-12-15 19:42:42.061104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 [2024-12-15 19:42:42.061113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 [2024-12-15 19:42:42.061120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 [2024-12-15 19:42:42.061128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 [2024-12-15 19:42:42.061136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf040 is same with the state(5) to be set 00:21:55.233 19:42:42 -- host/failover.sh@59 -- # wait 95491 00:22:01.832 0 00:22:01.832 19:42:47 -- host/failover.sh@61 -- # killprocess 95441 00:22:01.832 19:42:47 -- common/autotest_common.sh@936 -- # '[' -z 95441 ']' 00:22:01.832 19:42:47 -- common/autotest_common.sh@940 -- # kill -0 95441 00:22:01.832 19:42:47 -- common/autotest_common.sh@941 -- # uname 00:22:01.832 19:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:01.832 19:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95441 00:22:01.832 19:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:01.832 19:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:01.832 killing process with pid 95441 00:22:01.832 19:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95441' 00:22:01.832 19:42:47 -- common/autotest_common.sh@955 -- # kill 95441 00:22:01.832 19:42:47 -- common/autotest_common.sh@960 -- # wait 95441 00:22:01.832 19:42:47 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:01.832 [2024-12-15 19:42:30.769492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:01.832 [2024-12-15 19:42:30.769627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95441 ] 00:22:01.832 [2024-12-15 19:42:30.915470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.832 [2024-12-15 19:42:30.994150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.832 Running I/O for 15 seconds... 
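The listener changes traced between 19:42:33 and 19:42:42 are the failover choreography itself: the bdevperf controller NVMe0 is attached through ports 4420 and 4421, and the script then withdraws and restores listeners while perform_tests runs so that I/O is forced from one path to the next. The repeated tcp.c recv-state ERRORs above coincide with those listener changes, and the ABORTED - SQ DELETION completions that fill the rest of this try.txt dump are outstanding I/Os being aborted as queues on a withdrawn path are torn down; the run still returned 0, so this noise appears to be what the test is meant to exercise rather than a failure. A condensed sketch of the sequence, reusing the rpc.py and bdevperf.py calls from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    nqn=nqn.2016-06.io.spdk:cnode1

    # primary path plus an alternative path for the NVMe0 controller
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn

    # kick off the 15 s verify workload in the background
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &

    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the active path
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # force a second failover
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore the first port
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # and fail back to it
    wait    # perform_tests exited 0 in this run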
00:22:01.832 [2024-12-15 19:42:33.829441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-15 19:42:33.829718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-15 19:42:33.829732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829845] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.829976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.829990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.830808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.830946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.830992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.833 [2024-12-15 19:42:33.831249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-15 19:42:33.831276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-15 19:42:33.831294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 
[2024-12-15 19:42:33.831422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.831938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.831980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.831993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-15 19:42:33.832509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-15 19:42:33.832599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.834 [2024-12-15 19:42:33.832611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.832637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.832663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.832689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 
[2024-12-15 19:42:33.832711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.832739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.832766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.832792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.832818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.832862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.832929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.832970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.832986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.835 [2024-12-15 19:42:33.833767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17424 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-15 19:42:33.833985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-15 19:42:33.833998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c3d40 is same with the state(5) to be set 00:22:01.835 [2024-12-15 19:42:33.834026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.835 [2024-12-15 19:42:33.834036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.835 [2024-12-15 19:42:33.834053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17488 len:8 PRP1 0x0 PRP2 0x0 00:22:01.836 [2024-12-15 19:42:33.834066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:33.834137] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c3d40 was disconnected and freed. reset controller. 
00:22:01.836 [2024-12-15 19:42:33.834155] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:22:01.836 [2024-12-15 19:42:33.834220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:01.836 [2024-12-15 19:42:33.834241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.836 [2024-12-15 19:42:33.834255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:01.836 [2024-12-15 19:42:33.834278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.836 [2024-12-15 19:42:33.834291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:01.836 [2024-12-15 19:42:33.834305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.836 [2024-12-15 19:42:33.834327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:01.836 [2024-12-15 19:42:33.834370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.836 [2024-12-15 19:42:33.834383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:01.836 [2024-12-15 19:42:33.834442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691940 (9): Bad file descriptor 
00:22:01.836 [2024-12-15 19:42:33.836637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:22:01.836 [2024-12-15 19:42:33.866080] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:01.836 [2024-12-15 19:42:37.479317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.479992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.836 [2024-12-15 19:42:37.480448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.836 [2024-12-15 19:42:37.480475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.836 [2024-12-15 19:42:37.480489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.480512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.480539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.480619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82760 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.480906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.480998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:01.837 [2024-12-15 19:42:37.481253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.481434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.837 [2024-12-15 19:42:37.481741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.837 [2024-12-15 19:42:37.481767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.837 [2024-12-15 19:42:37.481788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.481800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.481840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.481866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.481892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.481918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.481961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.481975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.481987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.838 [2024-12-15 19:42:37.482728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.482956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.482982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.483009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483037] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.483050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.483099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.483128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.483169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.838 [2024-12-15 19:42:37.483194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.838 [2024-12-15 19:42:37.483207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.838 [2024-12-15 19:42:37.483219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.839 [2024-12-15 19:42:37.483244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483367] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.839 [2024-12-15 19:42:37.483555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.839 [2024-12-15 19:42:37.483600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:37.483938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.483967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69dd90 is same with the state(5) to be set 00:22:01.839 [2024-12-15 19:42:37.483992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.839 [2024-12-15 19:42:37.484003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.839 [2024-12-15 19:42:37.484031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:22:01.839 [2024-12-15 19:42:37.484047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.484120] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x69dd90 was disconnected and freed. reset controller. 
00:22:01.839 [2024-12-15 19:42:37.484139] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:01.839 [2024-12-15 19:42:37.484287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.839 [2024-12-15 19:42:37.484307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.484321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.839 [2024-12-15 19:42:37.484333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.484346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.839 [2024-12-15 19:42:37.484358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.484370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.839 [2024-12-15 19:42:37.484382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:37.484395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.839 [2024-12-15 19:42:37.484445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691940 (9): Bad file descriptor 00:22:01.839 [2024-12-15 19:42:37.487142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.839 [2024-12-15 19:42:37.518416] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:01.839 [2024-12-15 19:42:42.061296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 
19:42:42.061655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-15 19:42:42.061690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-15 19:42:42.061703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.061978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.061992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.840 [2024-12-15 19:42:42.062068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.840 [2024-12-15 19:42:42.062375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.840 [2024-12-15 19:42:42.062401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.840 [2024-12-15 19:42:42.062501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.840 [2024-12-15 19:42:42.062526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129464 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.840 [2024-12-15 19:42:42.062963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.840 [2024-12-15 19:42:42.062975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.062990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 
[2024-12-15 19:42:42.063228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.841 [2024-12-15 19:42:42.063941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.841 [2024-12-15 19:42:42.063966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.841 [2024-12-15 19:42:42.063980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.063993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064053] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.842 [2024-12-15 19:42:42.064639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 
[2024-12-15 19:42:42.064902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.064976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.064994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.065020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.065045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.842 [2024-12-15 19:42:42.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.842 [2024-12-15 19:42:42.065069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c4d50 is same with the state(5) to be set 00:22:01.842 [2024-12-15 19:42:42.065084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.842 [2024-12-15 19:42:42.065094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.842 [2024-12-15 19:42:42.065103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129688 len:8 PRP1 0x0 PRP2 0x0 00:22:01.843 [2024-12-15 19:42:42.065114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.843 [2024-12-15 19:42:42.065179] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c4d50 was disconnected and freed. reset controller. 
00:22:01.843 [2024-12-15 19:42:42.065213] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:01.843 [2024-12-15 19:42:42.065270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.843 [2024-12-15 19:42:42.065301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.843 [2024-12-15 19:42:42.065326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.843 [2024-12-15 19:42:42.065354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.843 [2024-12-15 19:42:42.065367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.843 [2024-12-15 19:42:42.065385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.843 [2024-12-15 19:42:42.065398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.843 [2024-12-15 19:42:42.065409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.843 [2024-12-15 19:42:42.065421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.843 [2024-12-15 19:42:42.067544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.843 [2024-12-15 19:42:42.067584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691940 (9): Bad file descriptor 00:22:01.843 [2024-12-15 19:42:42.084108] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
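The long runs of paired nvme_io_qpair_print_command / 'ABORTED - SQ DELETION (00/08)' notices above are expected in this test: every READ/WRITE still outstanding on a queue pair when it is torn down for failover is printed once as the command and once as its aborted completion, after which the bdev reconnects through the next portal (the 'Start failover from ... to ...' lines). A quick way to tally how many commands were aborted across the whole run, assuming the output was captured to the try.txt file that the script cats further down:

    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt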
00:22:01.843
00:22:01.843 Latency(us)
00:22:01.843 [2024-12-15T19:42:48.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.843 [2024-12-15T19:42:48.739Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:01.843 Verification LBA range: start 0x0 length 0x4000
00:22:01.843 NVMe0n1 : 15.01 13874.05 54.20 266.35 0.00 9036.29 580.89 15490.33
00:22:01.843 [2024-12-15T19:42:48.739Z] ===================================================================================================================
00:22:01.843 [2024-12-15T19:42:48.739Z] Total : 13874.05 54.20 266.35 0.00 9036.29 580.89 15490.33
00:22:01.843 Received shutdown signal, test time was about 15.000000 seconds
00:22:01.843
00:22:01.843 Latency(us)
00:22:01.843 [2024-12-15T19:42:48.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.843 [2024-12-15T19:42:48.739Z] ===================================================================================================================
00:22:01.843 [2024-12-15T19:42:48.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:01.843 19:42:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:01.843 19:42:47 -- host/failover.sh@65 -- # count=3
00:22:01.843 19:42:47 -- host/failover.sh@67 -- # (( count != 3 ))
00:22:01.843 19:42:47 -- host/failover.sh@73 -- # bdevperf_pid=95696
00:22:01.843 19:42:47 -- host/failover.sh@75 -- # waitforlisten 95696 /var/tmp/bdevperf.sock
00:22:01.843 19:42:47 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:01.843 19:42:47 -- common/autotest_common.sh@829 -- # '[' -z 95696 ']'
00:22:01.843 19:42:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:01.843 19:42:47 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:01.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:01.843 19:42:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
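Two quick sanity checks on the 15-second summary above: the MiB/s column is just the IOPS column times the 4096-byte I/O size, and the count=3 checked at host/failover.sh@65-67 matches the three 'Resetting controller successful' lines in this log, one per failover hop (4420 -> 4421 -> 4422 -> back to 4420). A minimal sketch of both checks; the try.txt path is an assumption based on the cat further down:

    # 13874.05 IOPS * 4096 B  ~=  54.2 MiB/s, matching the summary row
    awk 'BEGIN { printf "%.2f MiB/s\n", 13874.05 * 4096 / (1024 * 1024) }'
    # the test aborts unless exactly three controller resets succeeded
    grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt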
00:22:01.843 19:42:47 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:01.843 19:42:47 -- common/autotest_common.sh@10 -- # set +x
00:22:02.410 19:42:49 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:02.410 19:42:49 -- common/autotest_common.sh@862 -- # return 0
00:22:02.410 19:42:49 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:02.410 [2024-12-15 19:42:49.224062] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:02.410 19:42:49 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:02.668 [2024-12-15 19:42:49.444291] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:22:02.668 19:42:49 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:02.926 NVMe0n1
00:22:02.926 19:42:49 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:03.494
00:22:03.494 19:42:50 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:03.752
00:22:03.752 19:42:50 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:03.752 19:42:50 -- host/failover.sh@82 -- # grep -q NVMe0
00:22:04.010 19:42:50 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:04.269 19:42:50 -- host/failover.sh@87 -- # sleep 3
00:22:07.558 19:42:53 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:07.558 19:42:53 -- host/failover.sh@88 -- # grep -q NVMe0
00:22:07.558 19:42:54 -- host/failover.sh@90 -- # run_test_pid=95834
00:22:07.558 19:42:54 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:07.558 19:42:54 -- host/failover.sh@92 -- # wait 95834
00:22:08.493 0
00:22:08.493 19:42:55 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:08.493 [2024-12-15 19:42:48.017268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:22:08.493 [2024-12-15 19:42:48.017386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95696 ] 00:22:08.493 [2024-12-15 19:42:48.152736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.493 [2024-12-15 19:42:48.224476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.493 [2024-12-15 19:42:50.927749] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:08.493 [2024-12-15 19:42:50.927902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.493 [2024-12-15 19:42:50.927928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.493 [2024-12-15 19:42:50.927946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.493 [2024-12-15 19:42:50.927960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.493 [2024-12-15 19:42:50.927973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.493 [2024-12-15 19:42:50.927987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.493 [2024-12-15 19:42:50.928008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.493 [2024-12-15 19:42:50.928026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.493 [2024-12-15 19:42:50.928041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:08.493 [2024-12-15 19:42:50.928085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.493 [2024-12-15 19:42:50.928124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196e940 (9): Bad file descriptor 00:22:08.493 [2024-12-15 19:42:50.934985] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:08.493 Running I/O for 1 seconds... 
00:22:08.493 00:22:08.493 Latency(us) 00:22:08.493 [2024-12-15T19:42:55.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.493 [2024-12-15T19:42:55.389Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:08.493 Verification LBA range: start 0x0 length 0x4000 00:22:08.493 NVMe0n1 : 1.01 14564.90 56.89 0.00 0.00 8750.84 1392.64 9770.82 00:22:08.493 [2024-12-15T19:42:55.389Z] =================================================================================================================== 00:22:08.493 [2024-12-15T19:42:55.389Z] Total : 14564.90 56.89 0.00 0.00 8750.84 1392.64 9770.82 00:22:08.493 19:42:55 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.493 19:42:55 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:08.751 19:42:55 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.009 19:42:55 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.009 19:42:55 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:09.267 19:42:56 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.526 19:42:56 -- host/failover.sh@101 -- # sleep 3 00:22:12.810 19:42:59 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.810 19:42:59 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:12.810 19:42:59 -- host/failover.sh@108 -- # killprocess 95696 00:22:12.810 19:42:59 -- common/autotest_common.sh@936 -- # '[' -z 95696 ']' 00:22:12.810 19:42:59 -- common/autotest_common.sh@940 -- # kill -0 95696 00:22:12.810 19:42:59 -- common/autotest_common.sh@941 -- # uname 00:22:12.810 19:42:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.810 19:42:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95696 00:22:13.069 killing process with pid 95696 00:22:13.069 19:42:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:13.069 19:42:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:13.069 19:42:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95696' 00:22:13.069 19:42:59 -- common/autotest_common.sh@955 -- # kill 95696 00:22:13.069 19:42:59 -- common/autotest_common.sh@960 -- # wait 95696 00:22:13.327 19:42:59 -- host/failover.sh@110 -- # sync 00:22:13.327 19:43:00 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.585 19:43:00 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:13.585 19:43:00 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:13.585 19:43:00 -- host/failover.sh@116 -- # nvmftestfini 00:22:13.585 19:43:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.585 19:43:00 -- nvmf/common.sh@116 -- # sync 00:22:13.585 19:43:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.585 19:43:00 -- nvmf/common.sh@119 -- # set +e 00:22:13.585 19:43:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.585 19:43:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.585 rmmod nvme_tcp 
00:22:13.585 rmmod nvme_fabrics 00:22:13.585 rmmod nvme_keyring 00:22:13.585 19:43:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.585 19:43:00 -- nvmf/common.sh@123 -- # set -e 00:22:13.585 19:43:00 -- nvmf/common.sh@124 -- # return 0 00:22:13.585 19:43:00 -- nvmf/common.sh@477 -- # '[' -n 95325 ']' 00:22:13.585 19:43:00 -- nvmf/common.sh@478 -- # killprocess 95325 00:22:13.585 19:43:00 -- common/autotest_common.sh@936 -- # '[' -z 95325 ']' 00:22:13.585 19:43:00 -- common/autotest_common.sh@940 -- # kill -0 95325 00:22:13.585 19:43:00 -- common/autotest_common.sh@941 -- # uname 00:22:13.585 19:43:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.585 19:43:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95325 00:22:13.585 killing process with pid 95325 00:22:13.585 19:43:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:13.585 19:43:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:13.585 19:43:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95325' 00:22:13.586 19:43:00 -- common/autotest_common.sh@955 -- # kill 95325 00:22:13.586 19:43:00 -- common/autotest_common.sh@960 -- # wait 95325 00:22:14.153 19:43:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:14.153 19:43:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:14.153 19:43:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:14.153 19:43:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.153 19:43:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:14.153 19:43:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.153 19:43:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.153 19:43:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.153 19:43:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:14.153 00:22:14.153 real 0m33.409s 00:22:14.153 user 2m9.449s 00:22:14.153 sys 0m5.158s 00:22:14.153 19:43:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:14.153 19:43:00 -- common/autotest_common.sh@10 -- # set +x 00:22:14.153 ************************************ 00:22:14.153 END TEST nvmf_failover 00:22:14.153 ************************************ 00:22:14.153 19:43:00 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:14.154 19:43:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:14.154 19:43:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:14.154 19:43:00 -- common/autotest_common.sh@10 -- # set +x 00:22:14.154 ************************************ 00:22:14.154 START TEST nvmf_discovery 00:22:14.154 ************************************ 00:22:14.154 19:43:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:14.154 * Looking for test storage... 
00:22:14.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:14.154 19:43:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:14.154 19:43:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:14.154 19:43:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:14.154 19:43:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:14.154 19:43:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:14.154 19:43:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:14.154 19:43:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:14.154 19:43:00 -- scripts/common.sh@335 -- # IFS=.-: 00:22:14.154 19:43:00 -- scripts/common.sh@335 -- # read -ra ver1 00:22:14.154 19:43:00 -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.154 19:43:00 -- scripts/common.sh@336 -- # read -ra ver2 00:22:14.154 19:43:00 -- scripts/common.sh@337 -- # local 'op=<' 00:22:14.154 19:43:00 -- scripts/common.sh@339 -- # ver1_l=2 00:22:14.154 19:43:00 -- scripts/common.sh@340 -- # ver2_l=1 00:22:14.154 19:43:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:14.154 19:43:00 -- scripts/common.sh@343 -- # case "$op" in 00:22:14.154 19:43:00 -- scripts/common.sh@344 -- # : 1 00:22:14.154 19:43:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:14.154 19:43:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.154 19:43:00 -- scripts/common.sh@364 -- # decimal 1 00:22:14.154 19:43:00 -- scripts/common.sh@352 -- # local d=1 00:22:14.154 19:43:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.154 19:43:00 -- scripts/common.sh@354 -- # echo 1 00:22:14.154 19:43:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:14.154 19:43:00 -- scripts/common.sh@365 -- # decimal 2 00:22:14.154 19:43:01 -- scripts/common.sh@352 -- # local d=2 00:22:14.154 19:43:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.154 19:43:01 -- scripts/common.sh@354 -- # echo 2 00:22:14.154 19:43:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:14.154 19:43:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:14.154 19:43:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:14.154 19:43:01 -- scripts/common.sh@367 -- # return 0 00:22:14.154 19:43:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.154 19:43:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:14.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.154 --rc genhtml_branch_coverage=1 00:22:14.154 --rc genhtml_function_coverage=1 00:22:14.154 --rc genhtml_legend=1 00:22:14.154 --rc geninfo_all_blocks=1 00:22:14.154 --rc geninfo_unexecuted_blocks=1 00:22:14.154 00:22:14.154 ' 00:22:14.154 19:43:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:14.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.154 --rc genhtml_branch_coverage=1 00:22:14.154 --rc genhtml_function_coverage=1 00:22:14.154 --rc genhtml_legend=1 00:22:14.154 --rc geninfo_all_blocks=1 00:22:14.154 --rc geninfo_unexecuted_blocks=1 00:22:14.154 00:22:14.154 ' 00:22:14.154 19:43:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:14.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.154 --rc genhtml_branch_coverage=1 00:22:14.154 --rc genhtml_function_coverage=1 00:22:14.154 --rc genhtml_legend=1 00:22:14.154 --rc geninfo_all_blocks=1 00:22:14.154 --rc geninfo_unexecuted_blocks=1 00:22:14.154 00:22:14.154 ' 00:22:14.154 
19:43:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:14.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.154 --rc genhtml_branch_coverage=1 00:22:14.154 --rc genhtml_function_coverage=1 00:22:14.154 --rc genhtml_legend=1 00:22:14.154 --rc geninfo_all_blocks=1 00:22:14.154 --rc geninfo_unexecuted_blocks=1 00:22:14.154 00:22:14.154 ' 00:22:14.154 19:43:01 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:14.154 19:43:01 -- nvmf/common.sh@7 -- # uname -s 00:22:14.154 19:43:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.154 19:43:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.154 19:43:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.154 19:43:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.154 19:43:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.154 19:43:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.154 19:43:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.154 19:43:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.154 19:43:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.154 19:43:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:14.154 19:43:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:14.154 19:43:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.154 19:43:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.154 19:43:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:14.154 19:43:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:14.154 19:43:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.154 19:43:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.154 19:43:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.154 19:43:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.154 19:43:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.154 19:43:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.154 19:43:01 -- paths/export.sh@5 -- # export PATH 00:22:14.154 19:43:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.154 19:43:01 -- nvmf/common.sh@46 -- # : 0 00:22:14.154 19:43:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:14.154 19:43:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:14.154 19:43:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:14.154 19:43:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.154 19:43:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.154 19:43:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:14.154 19:43:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:14.154 19:43:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:14.154 19:43:01 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:14.154 19:43:01 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:14.154 19:43:01 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:14.154 19:43:01 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:14.154 19:43:01 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:14.154 19:43:01 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:14.154 19:43:01 -- host/discovery.sh@25 -- # nvmftestinit 00:22:14.154 19:43:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.154 19:43:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.154 19:43:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:14.154 19:43:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.154 19:43:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.154 19:43:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.154 19:43:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.154 19:43:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.154 19:43:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:14.154 19:43:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:14.154 19:43:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.154 19:43:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.154 19:43:01 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:14.154 19:43:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:14.154 19:43:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.154 19:43:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.154 19:43:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.154 19:43:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.154 19:43:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.154 19:43:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.154 19:43:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.154 19:43:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.154 19:43:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:14.413 19:43:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:14.413 Cannot find device "nvmf_tgt_br" 00:22:14.413 19:43:01 -- nvmf/common.sh@154 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.413 Cannot find device "nvmf_tgt_br2" 00:22:14.413 19:43:01 -- nvmf/common.sh@155 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:14.413 19:43:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:14.413 Cannot find device "nvmf_tgt_br" 00:22:14.413 19:43:01 -- nvmf/common.sh@157 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:14.413 Cannot find device "nvmf_tgt_br2" 00:22:14.413 19:43:01 -- nvmf/common.sh@158 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:14.413 19:43:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:14.413 19:43:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.413 19:43:01 -- nvmf/common.sh@161 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.413 19:43:01 -- nvmf/common.sh@162 -- # true 00:22:14.413 19:43:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.413 19:43:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.413 19:43:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.413 19:43:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.413 19:43:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.413 19:43:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.413 19:43:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.413 19:43:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:14.413 19:43:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:14.413 19:43:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:14.413 19:43:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:14.413 19:43:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:14.413 19:43:01 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:14.413 19:43:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.413 19:43:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.413 19:43:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.413 19:43:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:14.413 19:43:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:14.672 19:43:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.672 19:43:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.672 19:43:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.672 19:43:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.672 19:43:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.672 19:43:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:14.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:22:14.672 00:22:14.672 --- 10.0.0.2 ping statistics --- 00:22:14.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.672 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:14.672 19:43:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:14.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:14.672 00:22:14.672 --- 10.0.0.3 ping statistics --- 00:22:14.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.672 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:14.672 19:43:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:14.672 00:22:14.672 --- 10.0.0.1 ping statistics --- 00:22:14.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.672 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:14.672 19:43:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.672 19:43:01 -- nvmf/common.sh@421 -- # return 0 00:22:14.672 19:43:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:14.672 19:43:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.672 19:43:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:14.672 19:43:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:14.672 19:43:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.672 19:43:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:14.672 19:43:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:14.672 19:43:01 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:14.672 19:43:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:14.672 19:43:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.672 19:43:01 -- common/autotest_common.sh@10 -- # set +x 00:22:14.672 19:43:01 -- nvmf/common.sh@469 -- # nvmfpid=96151 00:22:14.672 19:43:01 -- nvmf/common.sh@470 -- # waitforlisten 96151 00:22:14.672 19:43:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:14.672 19:43:01 -- common/autotest_common.sh@829 -- # '[' -z 96151 ']' 00:22:14.672 19:43:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.672 19:43:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.672 19:43:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.672 19:43:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.672 19:43:01 -- common/autotest_common.sh@10 -- # set +x 00:22:14.672 [2024-12-15 19:43:01.464013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:14.672 [2024-12-15 19:43:01.464125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.931 [2024-12-15 19:43:01.600387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.931 [2024-12-15 19:43:01.691776] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.931 [2024-12-15 19:43:01.692025] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.931 [2024-12-15 19:43:01.692040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.931 [2024-12-15 19:43:01.692050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.931 [2024-12-15 19:43:01.692086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.867 19:43:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.867 19:43:02 -- common/autotest_common.sh@862 -- # return 0 00:22:15.867 19:43:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:15.867 19:43:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 19:43:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.867 19:43:02 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.867 19:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 [2024-12-15 19:43:02.495073] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.867 19:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 19:43:02 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:15.867 19:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 [2024-12-15 19:43:02.507228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:15.867 19:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 19:43:02 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:15.867 19:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 null0 00:22:15.867 19:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 19:43:02 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:15.867 19:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 null1 00:22:15.867 19:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 19:43:02 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:15.867 19:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 19:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 19:43:02 -- host/discovery.sh@45 -- # hostpid=96201 00:22:15.867 19:43:02 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:15.867 19:43:02 -- host/discovery.sh@46 -- # waitforlisten 96201 /tmp/host.sock 00:22:15.867 19:43:02 -- common/autotest_common.sh@829 -- # '[' -z 96201 ']' 00:22:15.867 19:43:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:15.867 19:43:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.867 19:43:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:15.867 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:15.867 19:43:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.867 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.867 [2024-12-15 19:43:02.597084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:15.867 [2024-12-15 19:43:02.597249] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96201 ] 00:22:15.867 [2024-12-15 19:43:02.735368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.126 [2024-12-15 19:43:02.825199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:16.126 [2024-12-15 19:43:02.825412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.093 19:43:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.093 19:43:03 -- common/autotest_common.sh@862 -- # return 0 00:22:17.093 19:43:03 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.093 19:43:03 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@72 -- # notify_id=0 00:22:17.093 19:43:03 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # sort 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # xargs 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:17.093 19:43:03 -- host/discovery.sh@79 -- # get_bdev_list 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # sort 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # xargs 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:17.093 19:43:03 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # sort 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # xargs 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:17.093 19:43:03 -- host/discovery.sh@83 -- # get_bdev_list 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # xargs 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # sort 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:17.093 19:43:03 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # xargs 00:22:17.093 19:43:03 -- host/discovery.sh@59 -- # sort 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.093 19:43:03 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:17.093 19:43:03 -- host/discovery.sh@87 -- # get_bdev_list 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.093 19:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.093 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # sort 00:22:17.093 19:43:03 -- host/discovery.sh@55 -- # xargs 00:22:17.093 19:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:17.353 19:43:04 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.353 19:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.353 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.353 [2024-12-15 19:43:04.009650] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.353 19:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:17.353 19:43:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.353 19:43:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.353 19:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.353 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.353 19:43:04 -- host/discovery.sh@59 -- # sort 00:22:17.353 19:43:04 -- host/discovery.sh@59 -- # xargs 
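The target-side bring-up that the discovery test has been driving in the trace above is all plain rpc.py. Collected in one place, and assuming the same addresses, ports and default RPC socket (/var/tmp/spdk.sock) that the un-flagged rpc_cmd calls in this log imply — every command below is otherwise taken verbatim from the trace — the sequence is roughly:

    # target side, nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (discovery.sh@32..@85 above)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    # next steps in the trace below: expose a data listener and allow the test host NQN
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

This is a sketch, not a verbatim replay: the script issues these through rpc_cmd and interleaves bdev_nvme_get_controllers / bdev_get_bdevs state checks against the host at /tmp/host.sock between each step, which is what the surrounding get_subsystem_names / get_bdev_list traces are doing.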
00:22:17.353 19:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:17.353 19:43:04 -- host/discovery.sh@93 -- # get_bdev_list 00:22:17.353 19:43:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.353 19:43:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.353 19:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.353 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.353 19:43:04 -- host/discovery.sh@55 -- # sort 00:22:17.353 19:43:04 -- host/discovery.sh@55 -- # xargs 00:22:17.353 19:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:17.353 19:43:04 -- host/discovery.sh@94 -- # get_notification_count 00:22:17.353 19:43:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:17.353 19:43:04 -- host/discovery.sh@74 -- # jq '. | length' 00:22:17.353 19:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.353 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.353 19:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@74 -- # notification_count=0 00:22:17.353 19:43:04 -- host/discovery.sh@75 -- # notify_id=0 00:22:17.353 19:43:04 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:17.353 19:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.353 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.353 19:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.353 19:43:04 -- host/discovery.sh@100 -- # sleep 1 00:22:17.922 [2024-12-15 19:43:04.652290] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:17.922 [2024-12-15 19:43:04.652366] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:17.922 [2024-12-15 19:43:04.652387] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:17.922 [2024-12-15 19:43:04.738549] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:17.922 [2024-12-15 19:43:04.794945] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:17.922 [2024-12-15 19:43:04.794975] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:18.490 19:43:05 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:18.490 19:43:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:18.490 19:43:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.490 19:43:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:18.490 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.490 19:43:05 -- host/discovery.sh@59 -- # sort 00:22:18.490 19:43:05 -- host/discovery.sh@59 -- # xargs 00:22:18.490 19:43:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@102 -- # get_bdev_list 00:22:18.490 19:43:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:18.490 
19:43:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.490 19:43:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.490 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.490 19:43:05 -- host/discovery.sh@55 -- # sort 00:22:18.490 19:43:05 -- host/discovery.sh@55 -- # xargs 00:22:18.490 19:43:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:18.490 19:43:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:18.490 19:43:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:18.490 19:43:05 -- host/discovery.sh@63 -- # sort -n 00:22:18.490 19:43:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.490 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.490 19:43:05 -- host/discovery.sh@63 -- # xargs 00:22:18.490 19:43:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.490 19:43:05 -- host/discovery.sh@104 -- # get_notification_count 00:22:18.490 19:43:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:18.490 19:43:05 -- host/discovery.sh@74 -- # jq '. | length' 00:22:18.490 19:43:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.490 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.490 19:43:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.748 19:43:05 -- host/discovery.sh@74 -- # notification_count=1 00:22:18.748 19:43:05 -- host/discovery.sh@75 -- # notify_id=1 00:22:18.748 19:43:05 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:18.748 19:43:05 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:18.748 19:43:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.748 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.748 19:43:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.748 19:43:05 -- host/discovery.sh@109 -- # sleep 1 00:22:19.686 19:43:06 -- host/discovery.sh@110 -- # get_bdev_list 00:22:19.686 19:43:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.686 19:43:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.686 19:43:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.686 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:22:19.686 19:43:06 -- host/discovery.sh@55 -- # sort 00:22:19.686 19:43:06 -- host/discovery.sh@55 -- # xargs 00:22:19.686 19:43:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.686 19:43:06 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:19.686 19:43:06 -- host/discovery.sh@111 -- # get_notification_count 00:22:19.686 19:43:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:19.686 19:43:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.686 19:43:06 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:19.686 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:22:19.686 19:43:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.686 19:43:06 -- host/discovery.sh@74 -- # notification_count=1 00:22:19.686 19:43:06 -- host/discovery.sh@75 -- # notify_id=2 00:22:19.686 19:43:06 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:19.686 19:43:06 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:19.686 19:43:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.686 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:22:19.686 [2024-12-15 19:43:06.539297] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:19.686 [2024-12-15 19:43:06.539795] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:19.686 [2024-12-15 19:43:06.539844] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:19.686 19:43:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.686 19:43:06 -- host/discovery.sh@117 -- # sleep 1 00:22:19.944 [2024-12-15 19:43:06.625820] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:19.944 [2024-12-15 19:43:06.685101] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:19.944 [2024-12-15 19:43:06.685295] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:19.944 [2024-12-15 19:43:06.685308] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:20.882 19:43:07 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:20.882 19:43:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.882 19:43:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.882 19:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.882 19:43:07 -- common/autotest_common.sh@10 -- # set +x 00:22:20.882 19:43:07 -- host/discovery.sh@59 -- # sort 00:22:20.882 19:43:07 -- host/discovery.sh@59 -- # xargs 00:22:20.882 19:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@119 -- # get_bdev_list 00:22:20.882 19:43:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.882 19:43:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.882 19:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.882 19:43:07 -- host/discovery.sh@55 -- # sort 00:22:20.882 19:43:07 -- common/autotest_common.sh@10 -- # set +x 00:22:20.882 19:43:07 -- host/discovery.sh@55 -- # xargs 00:22:20.882 19:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:20.882 19:43:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:20.882 19:43:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:20.882 19:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.882 19:43:07 -- host/discovery.sh@63 
-- # sort -n 00:22:20.882 19:43:07 -- host/discovery.sh@63 -- # xargs 00:22:20.882 19:43:07 -- common/autotest_common.sh@10 -- # set +x 00:22:20.882 19:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@121 -- # get_notification_count 00:22:20.882 19:43:07 -- host/discovery.sh@74 -- # jq '. | length' 00:22:20.882 19:43:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:20.882 19:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.882 19:43:07 -- common/autotest_common.sh@10 -- # set +x 00:22:20.882 19:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@74 -- # notification_count=0 00:22:20.882 19:43:07 -- host/discovery.sh@75 -- # notify_id=2 00:22:20.882 19:43:07 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:20.882 19:43:07 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:20.882 19:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.882 19:43:07 -- common/autotest_common.sh@10 -- # set +x 00:22:20.882 [2024-12-15 19:43:07.772320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.882 [2024-12-15 19:43:07.772565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.882 [2024-12-15 19:43:07.772601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.882 [2024-12-15 19:43:07.772611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.882 [2024-12-15 19:43:07.772621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.882 [2024-12-15 19:43:07.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.882 [2024-12-15 19:43:07.772640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.882 [2024-12-15 19:43:07.772649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.882 [2024-12-15 19:43:07.772659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:20.882 [2024-12-15 19:43:07.772789] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:20.882 [2024-12-15 19:43:07.772843] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:21.144 19:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.144 19:43:07 -- host/discovery.sh@127 -- # sleep 1 00:22:21.144 [2024-12-15 19:43:07.782272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.792291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.792452] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.792502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.792517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.792528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.792544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.792558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.792567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.792577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.792591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.144 [2024-12-15 19:43:07.802364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.802627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.802694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.802711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.802738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.802756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.802771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.802780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.802801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.802824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:21.144 [2024-12-15 19:43:07.812581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.812673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.812732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.812747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.812755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.812770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.812792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.812802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.812810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.812823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.144 [2024-12-15 19:43:07.822679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.822778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.822822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.822862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.822873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.822887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.822910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.822919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.822928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.822941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:21.144 [2024-12-15 19:43:07.832749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.832899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.832947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.832972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.832982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.833023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.833048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.833058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.833067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.833082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.144 [2024-12-15 19:43:07.842813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.842929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.843014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.843030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.144 [2024-12-15 19:43:07.843039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.144 [2024-12-15 19:43:07.843060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.144 [2024-12-15 19:43:07.843083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.144 [2024-12-15 19:43:07.843093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.144 [2024-12-15 19:43:07.843102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.144 [2024-12-15 19:43:07.843116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
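The burst of connect() failures in the last several entries is the expected fallout of the path flip at discovery.sh@126 above: errno 111 is ECONNREFUSED, and the host's bdev_nvme layer keeps retrying 10.0.0.2:4420 after that listener has been torn down, until the discovery poller (woken by the AER) refetches the log page and prunes the stale path — which is what the "4420 not found" / "4421 found again" lines just below record. As a sketch, assuming the same addresses and sockets used throughout this log, the two sides of that flip are:

    # target: drop the first data path (discovery.sh@126)
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host: no manual step - the discovery service started at discovery.sh@51
    #   (bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test)
    #   reacts to the AER and removes the dead 4420 path on its own

So the repeated "Resetting controller failed" messages here are transient retry noise while the stale path is being torn down.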
00:22:21.144 [2024-12-15 19:43:07.852928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:21.144 [2024-12-15 19:43:07.853020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.144 [2024-12-15 19:43:07.853064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.145 [2024-12-15 19:43:07.853079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88cf0 with addr=10.0.0.2, port=4420 00:22:21.145 [2024-12-15 19:43:07.853088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cf0 is same with the state(5) to be set 00:22:21.145 [2024-12-15 19:43:07.853102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88cf0 (9): Bad file descriptor 00:22:21.145 [2024-12-15 19:43:07.853124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.145 [2024-12-15 19:43:07.853134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:21.145 [2024-12-15 19:43:07.853141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.145 [2024-12-15 19:43:07.853154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.145 [2024-12-15 19:43:07.859132] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:21.145 [2024-12-15 19:43:07.859159] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:22.083 19:43:08 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:22.083 19:43:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.083 19:43:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.083 19:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.083 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:22:22.083 19:43:08 -- host/discovery.sh@59 -- # sort 00:22:22.083 19:43:08 -- host/discovery.sh@59 -- # xargs 00:22:22.083 19:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@129 -- # get_bdev_list 00:22:22.083 19:43:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.083 19:43:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.083 19:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.083 19:43:08 -- host/discovery.sh@55 -- # sort 00:22:22.083 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:22:22.083 19:43:08 -- host/discovery.sh@55 -- # xargs 00:22:22.083 19:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:22.083 19:43:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.083 19:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.083 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:22:22.083 19:43:08 -- host/discovery.sh@63 -- # xargs 00:22:22.083 19:43:08 -- host/discovery.sh@63 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:22:22.083 19:43:08 -- host/discovery.sh@63 -- # sort -n 00:22:22.083 19:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.083 19:43:08 -- host/discovery.sh@131 -- # get_notification_count 00:22:22.083 19:43:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.083 19:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.083 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:22:22.083 19:43:08 -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.083 19:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.342 19:43:09 -- host/discovery.sh@74 -- # notification_count=0 00:22:22.342 19:43:09 -- host/discovery.sh@75 -- # notify_id=2 00:22:22.342 19:43:09 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:22.342 19:43:09 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:22.342 19:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.342 19:43:09 -- common/autotest_common.sh@10 -- # set +x 00:22:22.342 19:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.342 19:43:09 -- host/discovery.sh@135 -- # sleep 1 00:22:23.279 19:43:10 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:23.279 19:43:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.279 19:43:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.279 19:43:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.279 19:43:10 -- common/autotest_common.sh@10 -- # set +x 00:22:23.279 19:43:10 -- host/discovery.sh@59 -- # sort 00:22:23.279 19:43:10 -- host/discovery.sh@59 -- # xargs 00:22:23.279 19:43:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.279 19:43:10 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:23.279 19:43:10 -- host/discovery.sh@137 -- # get_bdev_list 00:22:23.279 19:43:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.279 19:43:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.279 19:43:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.279 19:43:10 -- common/autotest_common.sh@10 -- # set +x 00:22:23.279 19:43:10 -- host/discovery.sh@55 -- # xargs 00:22:23.279 19:43:10 -- host/discovery.sh@55 -- # sort 00:22:23.280 19:43:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.280 19:43:10 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:23.280 19:43:10 -- host/discovery.sh@138 -- # get_notification_count 00:22:23.280 19:43:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:23.280 19:43:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.280 19:43:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:23.280 19:43:10 -- common/autotest_common.sh@10 -- # set +x 00:22:23.280 19:43:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.539 19:43:10 -- host/discovery.sh@74 -- # notification_count=2 00:22:23.539 19:43:10 -- host/discovery.sh@75 -- # notify_id=4 00:22:23.539 19:43:10 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:23.539 19:43:10 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.539 19:43:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.539 19:43:10 -- common/autotest_common.sh@10 -- # set +x 00:22:24.476 [2024-12-15 19:43:11.213648] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:24.476 [2024-12-15 19:43:11.213717] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:24.476 [2024-12-15 19:43:11.213733] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.476 [2024-12-15 19:43:11.299842] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:24.476 [2024-12-15 19:43:11.359582] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:24.476 [2024-12-15 19:43:11.359847] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:24.476 19:43:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.476 19:43:11 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.476 19:43:11 -- common/autotest_common.sh@650 -- # local es=0 00:22:24.476 19:43:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.476 19:43:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:24.476 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.476 19:43:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:24.476 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.476 19:43:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.476 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.476 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 2024/12/15 19:43:11 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:24.735 request: 00:22:24.735 { 00:22:24.735 "method": "bdev_nvme_start_discovery", 00:22:24.735 "params": { 00:22:24.735 "name": "nvme", 00:22:24.735 "trtype": "tcp", 00:22:24.735 "traddr": "10.0.0.2", 00:22:24.735 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:24.735 "adrfam": "ipv4", 00:22:24.735 "trsvcid": "8009", 00:22:24.735 "wait_for_attach": true 00:22:24.735 } 00:22:24.735 } 00:22:24.735 Got JSON-RPC error response 00:22:24.735 GoRPCClient: 
error on JSON-RPC call 00:22:24.735 19:43:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:24.735 19:43:11 -- common/autotest_common.sh@653 -- # es=1 00:22:24.735 19:43:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.735 19:43:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.735 19:43:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.735 19:43:11 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:24.735 19:43:11 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:24.735 19:43:11 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:24.735 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.735 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 19:43:11 -- host/discovery.sh@67 -- # sort 00:22:24.735 19:43:11 -- host/discovery.sh@67 -- # xargs 00:22:24.735 19:43:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.735 19:43:11 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:24.735 19:43:11 -- host/discovery.sh@147 -- # get_bdev_list 00:22:24.735 19:43:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.735 19:43:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.735 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.735 19:43:11 -- host/discovery.sh@55 -- # sort 00:22:24.735 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 19:43:11 -- host/discovery.sh@55 -- # xargs 00:22:24.735 19:43:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.735 19:43:11 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.735 19:43:11 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.735 19:43:11 -- common/autotest_common.sh@650 -- # local es=0 00:22:24.735 19:43:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.735 19:43:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:24.735 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.735 19:43:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:24.735 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.735 19:43:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.735 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.735 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 2024/12/15 19:43:11 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:24.735 request: 00:22:24.735 { 00:22:24.735 "method": "bdev_nvme_start_discovery", 00:22:24.735 "params": { 00:22:24.735 "name": "nvme_second", 00:22:24.735 "trtype": "tcp", 00:22:24.735 "traddr": "10.0.0.2", 00:22:24.736 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:24.736 "adrfam": "ipv4", 00:22:24.736 "trsvcid": "8009", 00:22:24.736 "wait_for_attach": true 00:22:24.736 } 00:22:24.736 } 
00:22:24.736 Got JSON-RPC error response 00:22:24.736 GoRPCClient: error on JSON-RPC call 00:22:24.736 19:43:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:24.736 19:43:11 -- common/autotest_common.sh@653 -- # es=1 00:22:24.736 19:43:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.736 19:43:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.736 19:43:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.736 19:43:11 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:24.736 19:43:11 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:24.736 19:43:11 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:24.736 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.736 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.736 19:43:11 -- host/discovery.sh@67 -- # xargs 00:22:24.736 19:43:11 -- host/discovery.sh@67 -- # sort 00:22:24.736 19:43:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.736 19:43:11 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:24.736 19:43:11 -- host/discovery.sh@153 -- # get_bdev_list 00:22:24.736 19:43:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.736 19:43:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.736 19:43:11 -- host/discovery.sh@55 -- # sort 00:22:24.736 19:43:11 -- host/discovery.sh@55 -- # xargs 00:22:24.736 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.736 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.736 19:43:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.736 19:43:11 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.736 19:43:11 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.736 19:43:11 -- common/autotest_common.sh@650 -- # local es=0 00:22:24.736 19:43:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.736 19:43:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:24.994 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.994 19:43:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:24.994 19:43:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.994 19:43:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.994 19:43:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.994 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:22:25.930 [2024-12-15 19:43:12.637002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.930 [2024-12-15 19:43:12.637130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.930 [2024-12-15 19:43:12.637149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88300 with addr=10.0.0.2, port=8010 00:22:25.930 [2024-12-15 19:43:12.637169] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:25.930 [2024-12-15 19:43:12.637179] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:25.930 [2024-12-15 
19:43:12.637187] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:26.866 [2024-12-15 19:43:13.636953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.866 [2024-12-15 19:43:13.637056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.866 [2024-12-15 19:43:13.637074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88300 with addr=10.0.0.2, port=8010 00:22:26.866 [2024-12-15 19:43:13.637087] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:26.866 [2024-12-15 19:43:13.637095] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:26.866 [2024-12-15 19:43:13.637104] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:27.801 [2024-12-15 19:43:14.636884] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:27.801 2024/12/15 19:43:14 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:27.801 request: 00:22:27.801 { 00:22:27.801 "method": "bdev_nvme_start_discovery", 00:22:27.801 "params": { 00:22:27.801 "name": "nvme_second", 00:22:27.801 "trtype": "tcp", 00:22:27.801 "traddr": "10.0.0.2", 00:22:27.801 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:27.801 "adrfam": "ipv4", 00:22:27.801 "trsvcid": "8010", 00:22:27.801 "attach_timeout_ms": 3000 00:22:27.801 } 00:22:27.801 } 00:22:27.801 Got JSON-RPC error response 00:22:27.801 GoRPCClient: error on JSON-RPC call 00:22:27.801 19:43:14 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:27.801 19:43:14 -- common/autotest_common.sh@653 -- # es=1 00:22:27.801 19:43:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.801 19:43:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.801 19:43:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.801 19:43:14 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:27.801 19:43:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:27.801 19:43:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:27.801 19:43:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.801 19:43:14 -- common/autotest_common.sh@10 -- # set +x 00:22:27.801 19:43:14 -- host/discovery.sh@67 -- # sort 00:22:27.801 19:43:14 -- host/discovery.sh@67 -- # xargs 00:22:27.801 19:43:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.060 19:43:14 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:28.060 19:43:14 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:28.060 19:43:14 -- host/discovery.sh@162 -- # kill 96201 00:22:28.060 19:43:14 -- host/discovery.sh@163 -- # nvmftestfini 00:22:28.060 19:43:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:28.060 19:43:14 -- nvmf/common.sh@116 -- # sync 00:22:28.060 19:43:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:28.060 19:43:14 -- nvmf/common.sh@119 -- # set +e 00:22:28.060 19:43:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:28.060 19:43:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:28.060 rmmod nvme_tcp 00:22:28.060 rmmod nvme_fabrics 00:22:28.060 rmmod nvme_keyring 
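The teardown traced here (and continued just below) reduces to a handful of steps. A condensed sketch using the same commands and PIDs as this run, with the liveness checks and retry loop of killprocess/nvmfcleanup omitted, and with _remove_spdk_ns (not expanded in the trace) approximated as a namespace delete:

    kill 96201                          # host application started for this test
    sync
    modprobe -v -r nvme-tcp             # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 96151                          # nvmf target (nvmfpid for this run)
    ip netns delete nvmf_tgt_ns_spdk    # assumption: stand-in for what _remove_spdk_ns does
    ip -4 addr flush nvmf_init_if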
00:22:28.060 19:43:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:28.060 19:43:14 -- nvmf/common.sh@123 -- # set -e 00:22:28.060 19:43:14 -- nvmf/common.sh@124 -- # return 0 00:22:28.060 19:43:14 -- nvmf/common.sh@477 -- # '[' -n 96151 ']' 00:22:28.060 19:43:14 -- nvmf/common.sh@478 -- # killprocess 96151 00:22:28.060 19:43:14 -- common/autotest_common.sh@936 -- # '[' -z 96151 ']' 00:22:28.060 19:43:14 -- common/autotest_common.sh@940 -- # kill -0 96151 00:22:28.060 19:43:14 -- common/autotest_common.sh@941 -- # uname 00:22:28.060 19:43:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.060 19:43:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96151 00:22:28.060 19:43:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:28.060 killing process with pid 96151 00:22:28.060 19:43:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:28.060 19:43:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96151' 00:22:28.060 19:43:14 -- common/autotest_common.sh@955 -- # kill 96151 00:22:28.060 19:43:14 -- common/autotest_common.sh@960 -- # wait 96151 00:22:28.321 19:43:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:28.322 19:43:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:28.322 19:43:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:28.322 19:43:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.322 19:43:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:28.322 19:43:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.322 19:43:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.322 19:43:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.322 19:43:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:28.322 00:22:28.322 real 0m14.358s 00:22:28.322 user 0m28.013s 00:22:28.322 sys 0m1.865s 00:22:28.322 19:43:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:28.322 19:43:15 -- common/autotest_common.sh@10 -- # set +x 00:22:28.322 ************************************ 00:22:28.322 END TEST nvmf_discovery 00:22:28.322 ************************************ 00:22:28.582 19:43:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:28.582 19:43:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:28.582 19:43:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.582 19:43:15 -- common/autotest_common.sh@10 -- # set +x 00:22:28.582 ************************************ 00:22:28.582 START TEST nvmf_discovery_remove_ifc 00:22:28.582 ************************************ 00:22:28.582 19:43:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:28.582 * Looking for test storage... 
00:22:28.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:28.582 19:43:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:28.582 19:43:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:28.582 19:43:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:28.582 19:43:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:28.582 19:43:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:28.582 19:43:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:28.582 19:43:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:28.582 19:43:15 -- scripts/common.sh@335 -- # IFS=.-: 00:22:28.582 19:43:15 -- scripts/common.sh@335 -- # read -ra ver1 00:22:28.582 19:43:15 -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.582 19:43:15 -- scripts/common.sh@336 -- # read -ra ver2 00:22:28.582 19:43:15 -- scripts/common.sh@337 -- # local 'op=<' 00:22:28.582 19:43:15 -- scripts/common.sh@339 -- # ver1_l=2 00:22:28.582 19:43:15 -- scripts/common.sh@340 -- # ver2_l=1 00:22:28.582 19:43:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:28.582 19:43:15 -- scripts/common.sh@343 -- # case "$op" in 00:22:28.582 19:43:15 -- scripts/common.sh@344 -- # : 1 00:22:28.582 19:43:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:28.582 19:43:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.582 19:43:15 -- scripts/common.sh@364 -- # decimal 1 00:22:28.582 19:43:15 -- scripts/common.sh@352 -- # local d=1 00:22:28.582 19:43:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.582 19:43:15 -- scripts/common.sh@354 -- # echo 1 00:22:28.582 19:43:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:28.582 19:43:15 -- scripts/common.sh@365 -- # decimal 2 00:22:28.582 19:43:15 -- scripts/common.sh@352 -- # local d=2 00:22:28.582 19:43:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.582 19:43:15 -- scripts/common.sh@354 -- # echo 2 00:22:28.582 19:43:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:28.582 19:43:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:28.582 19:43:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:28.582 19:43:15 -- scripts/common.sh@367 -- # return 0 00:22:28.582 19:43:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.582 19:43:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.582 --rc genhtml_branch_coverage=1 00:22:28.582 --rc genhtml_function_coverage=1 00:22:28.582 --rc genhtml_legend=1 00:22:28.582 --rc geninfo_all_blocks=1 00:22:28.582 --rc geninfo_unexecuted_blocks=1 00:22:28.582 00:22:28.582 ' 00:22:28.582 19:43:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.582 --rc genhtml_branch_coverage=1 00:22:28.582 --rc genhtml_function_coverage=1 00:22:28.582 --rc genhtml_legend=1 00:22:28.582 --rc geninfo_all_blocks=1 00:22:28.582 --rc geninfo_unexecuted_blocks=1 00:22:28.582 00:22:28.582 ' 00:22:28.582 19:43:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.582 --rc genhtml_branch_coverage=1 00:22:28.582 --rc genhtml_function_coverage=1 00:22:28.582 --rc genhtml_legend=1 00:22:28.582 --rc geninfo_all_blocks=1 00:22:28.582 --rc geninfo_unexecuted_blocks=1 00:22:28.582 00:22:28.582 ' 00:22:28.582 
19:43:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:28.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.582 --rc genhtml_branch_coverage=1 00:22:28.582 --rc genhtml_function_coverage=1 00:22:28.582 --rc genhtml_legend=1 00:22:28.582 --rc geninfo_all_blocks=1 00:22:28.582 --rc geninfo_unexecuted_blocks=1 00:22:28.582 00:22:28.582 ' 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.582 19:43:15 -- nvmf/common.sh@7 -- # uname -s 00:22:28.582 19:43:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.582 19:43:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.582 19:43:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.582 19:43:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.582 19:43:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.582 19:43:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.582 19:43:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.582 19:43:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.582 19:43:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.582 19:43:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:28.582 19:43:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:28.582 19:43:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.582 19:43:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.582 19:43:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.582 19:43:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.582 19:43:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.582 19:43:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.582 19:43:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.582 19:43:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.582 19:43:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.582 19:43:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.582 19:43:15 -- paths/export.sh@5 -- # export PATH 00:22:28.582 19:43:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.582 19:43:15 -- nvmf/common.sh@46 -- # : 0 00:22:28.582 19:43:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:28.582 19:43:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:28.582 19:43:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:28.582 19:43:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.582 19:43:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.582 19:43:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:28.582 19:43:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:28.582 19:43:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:28.582 19:43:15 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:28.582 19:43:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:28.582 19:43:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.582 19:43:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:28.582 19:43:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:28.582 19:43:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:28.582 19:43:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.582 19:43:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.582 19:43:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.582 19:43:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:28.582 19:43:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:28.582 19:43:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.582 19:43:15 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.582 19:43:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:28.582 19:43:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:28.582 19:43:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.582 19:43:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.582 19:43:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.582 19:43:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.583 19:43:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.583 19:43:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.583 19:43:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.583 19:43:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.583 19:43:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:28.841 19:43:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:28.841 Cannot find device "nvmf_tgt_br" 00:22:28.841 19:43:15 -- nvmf/common.sh@154 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.841 Cannot find device "nvmf_tgt_br2" 00:22:28.841 19:43:15 -- nvmf/common.sh@155 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:28.841 19:43:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:28.841 Cannot find device "nvmf_tgt_br" 00:22:28.841 19:43:15 -- nvmf/common.sh@157 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:28.841 Cannot find device "nvmf_tgt_br2" 00:22:28.841 19:43:15 -- nvmf/common.sh@158 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:28.841 19:43:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:28.841 19:43:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.841 19:43:15 -- nvmf/common.sh@161 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.841 19:43:15 -- nvmf/common.sh@162 -- # true 00:22:28.841 19:43:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.841 19:43:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.841 19:43:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:28.841 19:43:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.841 19:43:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:28.841 19:43:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:28.841 19:43:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:28.841 19:43:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:28.841 19:43:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:28.841 19:43:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:28.841 19:43:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:28.841 19:43:15 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:28.841 19:43:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:28.841 19:43:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.841 19:43:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:28.841 19:43:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:28.841 19:43:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:28.841 19:43:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:28.841 19:43:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:28.841 19:43:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:29.098 19:43:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:29.099 19:43:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:29.099 19:43:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:29.099 19:43:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:29.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:22:29.099 00:22:29.099 --- 10.0.0.2 ping statistics --- 00:22:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.099 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:29.099 19:43:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:29.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:22:29.099 00:22:29.099 --- 10.0.0.3 ping statistics --- 00:22:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.099 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:29.099 19:43:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:22:29.099 00:22:29.099 --- 10.0.0.1 ping statistics --- 00:22:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.099 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:29.099 19:43:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.099 19:43:15 -- nvmf/common.sh@421 -- # return 0 00:22:29.099 19:43:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:29.099 19:43:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.099 19:43:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:29.099 19:43:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:29.099 19:43:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.099 19:43:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:29.099 19:43:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:29.099 19:43:15 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:29.099 19:43:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:29.099 19:43:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.099 19:43:15 -- common/autotest_common.sh@10 -- # set +x 00:22:29.099 19:43:15 -- nvmf/common.sh@469 -- # nvmfpid=96714 00:22:29.099 19:43:15 -- nvmf/common.sh@470 -- # waitforlisten 96714 00:22:29.099 19:43:15 -- common/autotest_common.sh@829 -- # '[' -z 96714 ']' 00:22:29.099 19:43:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.099 19:43:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.099 19:43:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.099 19:43:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.099 19:43:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.099 19:43:15 -- common/autotest_common.sh@10 -- # set +x 00:22:29.099 [2024-12-15 19:43:15.864799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:29.099 [2024-12-15 19:43:15.864896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.357 [2024-12-15 19:43:15.998677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.357 [2024-12-15 19:43:16.078722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.357 [2024-12-15 19:43:16.078908] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.357 [2024-12-15 19:43:16.078922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.357 [2024-12-15 19:43:16.078931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
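Before the target is launched, nvmf_veth_init (traced above) builds a small veth-plus-bridge topology with the target side isolated in a network namespace. Stripped of the error handling, the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the "Cannot find device" cleanup noise, the setup amounts to roughly the following; names and addresses are taken verbatim from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf target is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as shown above), so 10.0.0.2 is reachable from the host application only through the veth pair that the test will later tear down.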
00:22:29.357 [2024-12-15 19:43:16.078957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.293 19:43:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.293 19:43:16 -- common/autotest_common.sh@862 -- # return 0 00:22:30.293 19:43:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:30.293 19:43:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.293 19:43:16 -- common/autotest_common.sh@10 -- # set +x 00:22:30.293 19:43:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.293 19:43:16 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:30.293 19:43:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.293 19:43:16 -- common/autotest_common.sh@10 -- # set +x 00:22:30.293 [2024-12-15 19:43:16.987496] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.293 [2024-12-15 19:43:16.995678] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:30.293 null0 00:22:30.293 [2024-12-15 19:43:17.027573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.293 19:43:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.293 19:43:17 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96770 00:22:30.293 19:43:17 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:30.293 19:43:17 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96770 /tmp/host.sock 00:22:30.293 19:43:17 -- common/autotest_common.sh@829 -- # '[' -z 96770 ']' 00:22:30.293 19:43:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:30.293 19:43:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.293 19:43:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:30.293 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:30.293 19:43:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.293 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.293 [2024-12-15 19:43:17.107956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:30.293 [2024-12-15 19:43:17.108058] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96770 ] 00:22:30.552 [2024-12-15 19:43:17.247876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.552 [2024-12-15 19:43:17.346817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.552 [2024-12-15 19:43:17.347012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.489 19:43:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.489 19:43:18 -- common/autotest_common.sh@862 -- # return 0 00:22:31.489 19:43:18 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.489 19:43:18 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:31.489 19:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.489 19:43:18 -- common/autotest_common.sh@10 -- # set +x 00:22:31.489 19:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.489 19:43:18 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:31.489 19:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.489 19:43:18 -- common/autotest_common.sh@10 -- # set +x 00:22:31.489 19:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.489 19:43:18 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:31.489 19:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.489 19:43:18 -- common/autotest_common.sh@10 -- # set +x 00:22:32.425 [2024-12-15 19:43:19.250896] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:32.425 [2024-12-15 19:43:19.250966] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:32.425 [2024-12-15 19:43:19.250987] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:32.684 [2024-12-15 19:43:19.337067] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:32.684 [2024-12-15 19:43:19.393443] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:32.684 [2024-12-15 19:43:19.393515] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:32.684 [2024-12-15 19:43:19.393546] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:32.684 [2024-12-15 19:43:19.393564] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:32.684 [2024-12-15 19:43:19.393592] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:32.684 19:43:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.684 [2024-12-15 
19:43:19.399249] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17f46c0 was disconnected and freed. delete nvme_qpair. 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.684 19:43:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.684 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.684 19:43:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.684 19:43:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.684 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.684 19:43:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.684 19:43:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.059 19:43:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.059 19:43:20 -- common/autotest_common.sh@10 -- # set +x 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.059 19:43:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.059 19:43:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.022 19:43:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.022 19:43:21 -- common/autotest_common.sh@10 -- # set +x 00:22:35.022 19:43:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.022 19:43:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.958 19:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.958 19:43:22 -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 19:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.958 19:43:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.893 19:43:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.894 19:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.894 19:43:23 -- common/autotest_common.sh@10 -- # set +x 00:22:36.894 19:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.894 19:43:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.269 19:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.269 19:43:24 -- common/autotest_common.sh@10 -- # set +x 00:22:38.269 19:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.269 19:43:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.269 [2024-12-15 19:43:24.821177] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:38.269 [2024-12-15 19:43:24.821311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.269 [2024-12-15 19:43:24.821328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.269 [2024-12-15 19:43:24.821342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.269 [2024-12-15 19:43:24.821351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.269 [2024-12-15 19:43:24.821372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.269 [2024-12-15 19:43:24.821381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.269 [2024-12-15 19:43:24.821391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.269 [2024-12-15 19:43:24.821399] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.269 [2024-12-15 19:43:24.821409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.269 [2024-12-15 19:43:24.821418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.269 [2024-12-15 19:43:24.821427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d04b0 is same with the state(5) to be set 00:22:38.269 [2024-12-15 19:43:24.831198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d04b0 (9): Bad file descriptor 00:22:38.269 [2024-12-15 19:43:24.841198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.205 19:43:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.205 19:43:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.205 19:43:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.205 19:43:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.205 19:43:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.205 19:43:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.205 19:43:25 -- common/autotest_common.sh@10 -- # set +x 00:22:39.205 [2024-12-15 19:43:25.864982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:40.141 [2024-12-15 19:43:26.888975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:40.141 [2024-12-15 19:43:26.889136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d04b0 with addr=10.0.0.2, port=4420 00:22:40.141 [2024-12-15 19:43:26.889176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d04b0 is same with the state(5) to be set 00:22:40.141 [2024-12-15 19:43:26.889241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:40.141 [2024-12-15 19:43:26.889266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:40.141 [2024-12-15 19:43:26.889286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:40.141 [2024-12-15 19:43:26.889315] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:40.141 [2024-12-15 19:43:26.890179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d04b0 (9): Bad file descriptor 00:22:40.141 [2024-12-15 19:43:26.890260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
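This is the core of the interface-removal case: once the test deletes the target address and downs nvmf_tgt_if (traced a little earlier), reads on the admin socket and the subsequent reconnect attempts time out (errno 110 is ETIMEDOUT, "Connection timed out"), the controller is declared lost, and nvme0n1 drops out of the bdev list. The relevant steps, copied from the trace with the wait_for_bdev '' helper condensed into an explicit loop (illustrative only):

    # drop the target address and take the interface down inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # poll the host socket until bdev_get_bdevs reports no bdevs left
    while [[ -n "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]]; do
        sleep 1
    done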
00:22:40.141 [2024-12-15 19:43:26.890311] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:40.141 [2024-12-15 19:43:26.890406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.141 [2024-12-15 19:43:26.890439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.141 [2024-12-15 19:43:26.890475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.141 [2024-12-15 19:43:26.890495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.141 [2024-12-15 19:43:26.890517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.141 [2024-12-15 19:43:26.890538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.141 [2024-12-15 19:43:26.890560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.141 [2024-12-15 19:43:26.890580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.141 [2024-12-15 19:43:26.890602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.141 [2024-12-15 19:43:26.890621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.141 [2024-12-15 19:43:26.890642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:40.141 [2024-12-15 19:43:26.890674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bb8f0 (9): Bad file descriptor 00:22:40.141 [2024-12-15 19:43:26.891291] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:40.141 [2024-12-15 19:43:26.891338] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:40.141 19:43:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.141 19:43:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.141 19:43:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.076 19:43:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.076 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.076 19:43:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.076 19:43:27 -- common/autotest_common.sh@10 -- # set +x 00:22:41.076 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.076 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.076 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.076 19:43:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.335 19:43:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.335 19:43:27 -- common/autotest_common.sh@10 -- # set +x 00:22:41.335 19:43:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.335 19:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.335 19:43:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:41.335 19:43:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.272 [2024-12-15 19:43:28.902427] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:42.272 [2024-12-15 19:43:28.902472] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:42.272 [2024-12-15 19:43:28.902491] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.272 [2024-12-15 19:43:28.988546] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:42.272 [2024-12-15 19:43:29.043885] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:42.272 [2024-12-15 19:43:29.043941] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:42.272 [2024-12-15 19:43:29.043966] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:42.272 [2024-12-15 19:43:29.043982] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:42.272 [2024-12-15 19:43:29.043992] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.272 [2024-12-15 19:43:29.050868] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17ff330 was disconnected and freed. delete nvme_qpair. 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.272 19:43:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.272 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.272 19:43:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:42.272 19:43:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96770 00:22:42.272 19:43:29 -- common/autotest_common.sh@936 -- # '[' -z 96770 ']' 00:22:42.272 19:43:29 -- common/autotest_common.sh@940 -- # kill -0 96770 00:22:42.272 19:43:29 -- common/autotest_common.sh@941 -- # uname 00:22:42.272 19:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.272 19:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96770 00:22:42.272 killing process with pid 96770 00:22:42.272 19:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:42.272 19:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:42.272 19:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96770' 00:22:42.272 19:43:29 -- common/autotest_common.sh@955 -- # kill 96770 00:22:42.272 19:43:29 -- common/autotest_common.sh@960 -- # wait 96770 00:22:42.531 19:43:29 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:42.531 19:43:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:42.531 19:43:29 -- nvmf/common.sh@116 -- # sync 00:22:42.531 19:43:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:42.531 19:43:29 -- nvmf/common.sh@119 -- # set +e 00:22:42.531 19:43:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:42.531 19:43:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:42.531 rmmod nvme_tcp 00:22:42.789 rmmod nvme_fabrics 00:22:42.789 rmmod nvme_keyring 00:22:42.789 19:43:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:42.789 19:43:29 -- nvmf/common.sh@123 -- # set -e 00:22:42.789 19:43:29 -- nvmf/common.sh@124 -- # return 0 00:22:42.789 19:43:29 -- nvmf/common.sh@477 -- # '[' -n 96714 ']' 00:22:42.789 19:43:29 -- nvmf/common.sh@478 -- # killprocess 96714 00:22:42.789 19:43:29 -- common/autotest_common.sh@936 -- # '[' -z 96714 ']' 00:22:42.789 19:43:29 -- common/autotest_common.sh@940 -- # kill -0 96714 00:22:42.789 19:43:29 -- common/autotest_common.sh@941 -- # uname 00:22:42.789 19:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.789 19:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96714 00:22:42.789 killing process with pid 96714 00:22:42.789 19:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:42.789 19:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:22:42.789 19:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96714' 00:22:42.789 19:43:29 -- common/autotest_common.sh@955 -- # kill 96714 00:22:42.789 19:43:29 -- common/autotest_common.sh@960 -- # wait 96714 00:22:43.048 19:43:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:43.048 19:43:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:43.048 19:43:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:43.048 19:43:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.048 19:43:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:43.048 19:43:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.048 19:43:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.048 19:43:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.048 19:43:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:43.048 ************************************ 00:22:43.048 END TEST nvmf_discovery_remove_ifc 00:22:43.048 ************************************ 00:22:43.048 00:22:43.048 real 0m14.588s 00:22:43.048 user 0m24.947s 00:22:43.048 sys 0m1.685s 00:22:43.048 19:43:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:43.048 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:22:43.048 19:43:29 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:43.048 19:43:29 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:43.048 19:43:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:43.048 19:43:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:43.048 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:22:43.048 ************************************ 00:22:43.048 START TEST nvmf_digest 00:22:43.048 ************************************ 00:22:43.048 19:43:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:43.306 * Looking for test storage... 00:22:43.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:43.306 19:43:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:43.306 19:43:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:43.306 19:43:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:43.306 19:43:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:43.306 19:43:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:43.306 19:43:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:43.306 19:43:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:43.306 19:43:30 -- scripts/common.sh@335 -- # IFS=.-: 00:22:43.306 19:43:30 -- scripts/common.sh@335 -- # read -ra ver1 00:22:43.306 19:43:30 -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.306 19:43:30 -- scripts/common.sh@336 -- # read -ra ver2 00:22:43.306 19:43:30 -- scripts/common.sh@337 -- # local 'op=<' 00:22:43.306 19:43:30 -- scripts/common.sh@339 -- # ver1_l=2 00:22:43.306 19:43:30 -- scripts/common.sh@340 -- # ver2_l=1 00:22:43.306 19:43:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:43.307 19:43:30 -- scripts/common.sh@343 -- # case "$op" in 00:22:43.307 19:43:30 -- scripts/common.sh@344 -- # : 1 00:22:43.307 19:43:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:43.307 19:43:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.307 19:43:30 -- scripts/common.sh@364 -- # decimal 1 00:22:43.307 19:43:30 -- scripts/common.sh@352 -- # local d=1 00:22:43.307 19:43:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.307 19:43:30 -- scripts/common.sh@354 -- # echo 1 00:22:43.307 19:43:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:43.307 19:43:30 -- scripts/common.sh@365 -- # decimal 2 00:22:43.307 19:43:30 -- scripts/common.sh@352 -- # local d=2 00:22:43.307 19:43:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.307 19:43:30 -- scripts/common.sh@354 -- # echo 2 00:22:43.307 19:43:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:43.307 19:43:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:43.307 19:43:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:43.307 19:43:30 -- scripts/common.sh@367 -- # return 0 00:22:43.307 19:43:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.307 19:43:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.307 --rc genhtml_branch_coverage=1 00:22:43.307 --rc genhtml_function_coverage=1 00:22:43.307 --rc genhtml_legend=1 00:22:43.307 --rc geninfo_all_blocks=1 00:22:43.307 --rc geninfo_unexecuted_blocks=1 00:22:43.307 00:22:43.307 ' 00:22:43.307 19:43:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.307 --rc genhtml_branch_coverage=1 00:22:43.307 --rc genhtml_function_coverage=1 00:22:43.307 --rc genhtml_legend=1 00:22:43.307 --rc geninfo_all_blocks=1 00:22:43.307 --rc geninfo_unexecuted_blocks=1 00:22:43.307 00:22:43.307 ' 00:22:43.307 19:43:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.307 --rc genhtml_branch_coverage=1 00:22:43.307 --rc genhtml_function_coverage=1 00:22:43.307 --rc genhtml_legend=1 00:22:43.307 --rc geninfo_all_blocks=1 00:22:43.307 --rc geninfo_unexecuted_blocks=1 00:22:43.307 00:22:43.307 ' 00:22:43.307 19:43:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.307 --rc genhtml_branch_coverage=1 00:22:43.307 --rc genhtml_function_coverage=1 00:22:43.307 --rc genhtml_legend=1 00:22:43.307 --rc geninfo_all_blocks=1 00:22:43.307 --rc geninfo_unexecuted_blocks=1 00:22:43.307 00:22:43.307 ' 00:22:43.307 19:43:30 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.307 19:43:30 -- nvmf/common.sh@7 -- # uname -s 00:22:43.307 19:43:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.307 19:43:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.307 19:43:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.307 19:43:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.307 19:43:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.307 19:43:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.307 19:43:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.307 19:43:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.307 19:43:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.307 19:43:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:43.307 
19:43:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:22:43.307 19:43:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.307 19:43:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.307 19:43:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.307 19:43:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.307 19:43:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.307 19:43:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.307 19:43:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.307 19:43:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.307 19:43:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.307 19:43:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.307 19:43:30 -- paths/export.sh@5 -- # export PATH 00:22:43.307 19:43:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.307 19:43:30 -- nvmf/common.sh@46 -- # : 0 00:22:43.307 19:43:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:43.307 19:43:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:43.307 19:43:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:43.307 19:43:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.307 19:43:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.307 19:43:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:43.307 19:43:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:43.307 19:43:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:43.307 19:43:30 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:43.307 19:43:30 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:43.307 19:43:30 -- host/digest.sh@16 -- # runtime=2 00:22:43.307 19:43:30 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:43.307 19:43:30 -- host/digest.sh@132 -- # nvmftestinit 00:22:43.307 19:43:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:43.307 19:43:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.307 19:43:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:43.307 19:43:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:43.307 19:43:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:43.307 19:43:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.307 19:43:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.307 19:43:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.307 19:43:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:43.307 19:43:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:43.307 19:43:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.307 19:43:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.307 19:43:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:43.307 19:43:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:43.307 19:43:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.307 19:43:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.307 19:43:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.307 19:43:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.307 19:43:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.307 19:43:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.307 19:43:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.307 19:43:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.307 19:43:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:43.307 19:43:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:43.307 Cannot find device "nvmf_tgt_br" 00:22:43.307 19:43:30 -- nvmf/common.sh@154 -- # true 00:22:43.307 19:43:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.307 Cannot find device "nvmf_tgt_br2" 00:22:43.307 19:43:30 -- nvmf/common.sh@155 -- # true 00:22:43.307 19:43:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:43.307 19:43:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:43.307 Cannot find device "nvmf_tgt_br" 00:22:43.307 19:43:30 -- nvmf/common.sh@157 -- # true 00:22:43.307 19:43:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:43.307 Cannot find device "nvmf_tgt_br2" 00:22:43.307 19:43:30 -- nvmf/common.sh@158 -- # true 00:22:43.307 19:43:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:43.566 19:43:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:43.566 
19:43:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.566 19:43:30 -- nvmf/common.sh@161 -- # true 00:22:43.566 19:43:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.566 19:43:30 -- nvmf/common.sh@162 -- # true 00:22:43.566 19:43:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.566 19:43:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.566 19:43:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:43.566 19:43:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:43.566 19:43:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:43.566 19:43:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:43.566 19:43:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:43.566 19:43:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:43.566 19:43:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:43.566 19:43:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:43.566 19:43:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:43.566 19:43:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:43.566 19:43:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:43.566 19:43:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.566 19:43:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:43.566 19:43:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:43.566 19:43:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:43.566 19:43:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:43.566 19:43:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:43.566 19:43:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:43.566 19:43:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:43.566 19:43:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:43.566 19:43:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:43.566 19:43:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:43.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:22:43.566 00:22:43.566 --- 10.0.0.2 ping statistics --- 00:22:43.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.566 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:43.566 19:43:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:43.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:43.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:43.566 00:22:43.566 --- 10.0.0.3 ping statistics --- 00:22:43.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.566 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:43.566 19:43:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:43.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:22:43.566 00:22:43.566 --- 10.0.0.1 ping statistics --- 00:22:43.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.566 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:43.566 19:43:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.566 19:43:30 -- nvmf/common.sh@421 -- # return 0 00:22:43.566 19:43:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:43.566 19:43:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.566 19:43:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:43.566 19:43:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:43.566 19:43:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.566 19:43:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:43.566 19:43:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:43.567 19:43:30 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:43.567 19:43:30 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:43.567 19:43:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:43.567 19:43:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:43.567 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 ************************************ 00:22:43.825 START TEST nvmf_digest_clean 00:22:43.825 ************************************ 00:22:43.825 19:43:30 -- common/autotest_common.sh@1114 -- # run_digest 00:22:43.825 19:43:30 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:43.825 19:43:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:43.825 19:43:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.825 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 19:43:30 -- nvmf/common.sh@469 -- # nvmfpid=97193 00:22:43.825 19:43:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:43.825 19:43:30 -- nvmf/common.sh@470 -- # waitforlisten 97193 00:22:43.825 19:43:30 -- common/autotest_common.sh@829 -- # '[' -z 97193 ']' 00:22:43.825 19:43:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.825 19:43:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.825 19:43:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.825 19:43:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.825 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 [2024-12-15 19:43:30.519072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:43.825 [2024-12-15 19:43:30.519283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.825 [2024-12-15 19:43:30.657931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.122 [2024-12-15 19:43:30.747367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.122 [2024-12-15 19:43:30.747543] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.122 [2024-12-15 19:43:30.747560] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.122 [2024-12-15 19:43:30.747571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.122 [2024-12-15 19:43:30.747601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.122 19:43:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.122 19:43:30 -- common/autotest_common.sh@862 -- # return 0 00:22:44.122 19:43:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:44.122 19:43:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.122 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:44.122 19:43:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.122 19:43:30 -- host/digest.sh@120 -- # common_target_config 00:22:44.122 19:43:30 -- host/digest.sh@43 -- # rpc_cmd 00:22:44.122 19:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.122 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:44.122 null0 00:22:44.122 [2024-12-15 19:43:30.941761] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.122 [2024-12-15 19:43:30.965997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.122 19:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.122 19:43:30 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:44.122 19:43:30 -- host/digest.sh@77 -- # local rw bs qd 00:22:44.122 19:43:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:44.122 19:43:30 -- host/digest.sh@80 -- # rw=randread 00:22:44.122 19:43:30 -- host/digest.sh@80 -- # bs=4096 00:22:44.122 19:43:30 -- host/digest.sh@80 -- # qd=128 00:22:44.122 19:43:30 -- host/digest.sh@82 -- # bperfpid=97225 00:22:44.122 19:43:30 -- host/digest.sh@83 -- # waitforlisten 97225 /var/tmp/bperf.sock 00:22:44.122 19:43:30 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:44.122 19:43:30 -- common/autotest_common.sh@829 -- # '[' -z 97225 ']' 00:22:44.122 19:43:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:44.122 19:43:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.122 19:43:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:44.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:44.122 19:43:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.122 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:22:44.384 [2024-12-15 19:43:31.024981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:44.384 [2024-12-15 19:43:31.025236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97225 ] 00:22:44.384 [2024-12-15 19:43:31.160568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.384 [2024-12-15 19:43:31.260198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.319 19:43:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.319 19:43:31 -- common/autotest_common.sh@862 -- # return 0 00:22:45.319 19:43:31 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:45.319 19:43:31 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:45.319 19:43:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:45.578 19:43:32 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.578 19:43:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.837 nvme0n1 00:22:45.837 19:43:32 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:45.837 19:43:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.095 Running I/O for 2 seconds... 
00:22:47.999 00:22:47.999 Latency(us) 00:22:47.999 [2024-12-15T19:43:34.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.999 [2024-12-15T19:43:34.895Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:47.999 nvme0n1 : 2.01 23911.99 93.41 0.00 0.00 5348.49 2263.97 12392.26 00:22:47.999 [2024-12-15T19:43:34.895Z] =================================================================================================================== 00:22:47.999 [2024-12-15T19:43:34.895Z] Total : 23911.99 93.41 0.00 0.00 5348.49 2263.97 12392.26 00:22:47.999 0 00:22:47.999 19:43:34 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:47.999 19:43:34 -- host/digest.sh@92 -- # get_accel_stats 00:22:47.999 19:43:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:47.999 19:43:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:47.999 | select(.opcode=="crc32c") 00:22:47.999 | "\(.module_name) \(.executed)"' 00:22:47.999 19:43:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:48.258 19:43:35 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:48.258 19:43:35 -- host/digest.sh@93 -- # exp_module=software 00:22:48.258 19:43:35 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:48.258 19:43:35 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:48.258 19:43:35 -- host/digest.sh@97 -- # killprocess 97225 00:22:48.258 19:43:35 -- common/autotest_common.sh@936 -- # '[' -z 97225 ']' 00:22:48.258 19:43:35 -- common/autotest_common.sh@940 -- # kill -0 97225 00:22:48.258 19:43:35 -- common/autotest_common.sh@941 -- # uname 00:22:48.258 19:43:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.258 19:43:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97225 00:22:48.258 19:43:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.258 19:43:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.258 19:43:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97225' 00:22:48.258 killing process with pid 97225 00:22:48.258 Received shutdown signal, test time was about 2.000000 seconds 00:22:48.258 00:22:48.258 Latency(us) 00:22:48.258 [2024-12-15T19:43:35.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.258 [2024-12-15T19:43:35.154Z] =================================================================================================================== 00:22:48.258 [2024-12-15T19:43:35.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.258 19:43:35 -- common/autotest_common.sh@955 -- # kill 97225 00:22:48.258 19:43:35 -- common/autotest_common.sh@960 -- # wait 97225 00:22:48.516 19:43:35 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:48.516 19:43:35 -- host/digest.sh@77 -- # local rw bs qd 00:22:48.516 19:43:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:48.516 19:43:35 -- host/digest.sh@80 -- # rw=randread 00:22:48.516 19:43:35 -- host/digest.sh@80 -- # bs=131072 00:22:48.516 19:43:35 -- host/digest.sh@80 -- # qd=16 00:22:48.516 19:43:35 -- host/digest.sh@82 -- # bperfpid=97320 00:22:48.516 19:43:35 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:48.516 19:43:35 -- host/digest.sh@83 -- # waitforlisten 97320 /var/tmp/bperf.sock 00:22:48.516 19:43:35 -- 
common/autotest_common.sh@829 -- # '[' -z 97320 ']' 00:22:48.516 19:43:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:48.516 19:43:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:48.516 19:43:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:48.516 19:43:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.516 19:43:35 -- common/autotest_common.sh@10 -- # set +x 00:22:48.775 [2024-12-15 19:43:35.429165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:48.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:48.775 Zero copy mechanism will not be used. 00:22:48.775 [2024-12-15 19:43:35.429339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97320 ] 00:22:48.775 [2024-12-15 19:43:35.558422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.775 [2024-12-15 19:43:35.630702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.711 19:43:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.711 19:43:36 -- common/autotest_common.sh@862 -- # return 0 00:22:49.711 19:43:36 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:49.711 19:43:36 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:49.711 19:43:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:49.970 19:43:36 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:49.970 19:43:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.228 nvme0n1 00:22:50.228 19:43:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:50.228 19:43:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.487 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:50.487 Zero copy mechanism will not be used. 00:22:50.487 Running I/O for 2 seconds... 
00:22:52.393 00:22:52.393 Latency(us) 00:22:52.393 [2024-12-15T19:43:39.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.393 [2024-12-15T19:43:39.289Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:52.393 nvme0n1 : 2.00 9993.28 1249.16 0.00 0.00 1598.37 502.69 2844.86 00:22:52.393 [2024-12-15T19:43:39.289Z] =================================================================================================================== 00:22:52.393 [2024-12-15T19:43:39.289Z] Total : 9993.28 1249.16 0.00 0.00 1598.37 502.69 2844.86 00:22:52.393 0 00:22:52.393 19:43:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:52.393 19:43:39 -- host/digest.sh@92 -- # get_accel_stats 00:22:52.393 19:43:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:52.393 19:43:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:52.393 | select(.opcode=="crc32c") 00:22:52.393 | "\(.module_name) \(.executed)"' 00:22:52.393 19:43:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:52.652 19:43:39 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:52.652 19:43:39 -- host/digest.sh@93 -- # exp_module=software 00:22:52.652 19:43:39 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:52.652 19:43:39 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:52.652 19:43:39 -- host/digest.sh@97 -- # killprocess 97320 00:22:52.652 19:43:39 -- common/autotest_common.sh@936 -- # '[' -z 97320 ']' 00:22:52.652 19:43:39 -- common/autotest_common.sh@940 -- # kill -0 97320 00:22:52.652 19:43:39 -- common/autotest_common.sh@941 -- # uname 00:22:52.652 19:43:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.652 19:43:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97320 00:22:52.652 19:43:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:52.652 killing process with pid 97320 00:22:52.652 19:43:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:52.652 19:43:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97320' 00:22:52.652 Received shutdown signal, test time was about 2.000000 seconds 00:22:52.652 00:22:52.652 Latency(us) 00:22:52.652 [2024-12-15T19:43:39.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.652 [2024-12-15T19:43:39.548Z] =================================================================================================================== 00:22:52.652 [2024-12-15T19:43:39.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.652 19:43:39 -- common/autotest_common.sh@955 -- # kill 97320 00:22:52.652 19:43:39 -- common/autotest_common.sh@960 -- # wait 97320 00:22:52.911 19:43:39 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:52.911 19:43:39 -- host/digest.sh@77 -- # local rw bs qd 00:22:52.911 19:43:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:52.911 19:43:39 -- host/digest.sh@80 -- # rw=randwrite 00:22:52.911 19:43:39 -- host/digest.sh@80 -- # bs=4096 00:22:52.911 19:43:39 -- host/digest.sh@80 -- # qd=128 00:22:52.911 19:43:39 -- host/digest.sh@82 -- # bperfpid=97406 00:22:52.911 19:43:39 -- host/digest.sh@83 -- # waitforlisten 97406 /var/tmp/bperf.sock 00:22:52.911 19:43:39 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:52.911 19:43:39 -- 
common/autotest_common.sh@829 -- # '[' -z 97406 ']' 00:22:52.911 19:43:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:52.911 19:43:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:52.911 19:43:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:52.911 19:43:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.911 19:43:39 -- common/autotest_common.sh@10 -- # set +x 00:22:52.911 [2024-12-15 19:43:39.766039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:52.911 [2024-12-15 19:43:39.766141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97406 ] 00:22:53.170 [2024-12-15 19:43:39.901612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.170 [2024-12-15 19:43:39.967599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.170 19:43:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.170 19:43:40 -- common/autotest_common.sh@862 -- # return 0 00:22:53.170 19:43:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:53.170 19:43:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:53.170 19:43:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:53.738 19:43:40 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.738 19:43:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.996 nvme0n1 00:22:53.996 19:43:40 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:53.996 19:43:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:53.996 Running I/O for 2 seconds... 
00:22:56.529 00:22:56.529 Latency(us) 00:22:56.529 [2024-12-15T19:43:43.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.529 [2024-12-15T19:43:43.425Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:56.530 nvme0n1 : 2.00 29056.67 113.50 0.00 0.00 4400.70 2442.71 14954.12 00:22:56.530 [2024-12-15T19:43:43.426Z] =================================================================================================================== 00:22:56.530 [2024-12-15T19:43:43.426Z] Total : 29056.67 113.50 0.00 0.00 4400.70 2442.71 14954.12 00:22:56.530 0 00:22:56.530 19:43:42 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:56.530 19:43:42 -- host/digest.sh@92 -- # get_accel_stats 00:22:56.530 19:43:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:56.530 19:43:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:56.530 19:43:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:56.530 | select(.opcode=="crc32c") 00:22:56.530 | "\(.module_name) \(.executed)"' 00:22:56.530 19:43:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:56.530 19:43:43 -- host/digest.sh@93 -- # exp_module=software 00:22:56.530 19:43:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:56.530 19:43:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:56.530 19:43:43 -- host/digest.sh@97 -- # killprocess 97406 00:22:56.530 19:43:43 -- common/autotest_common.sh@936 -- # '[' -z 97406 ']' 00:22:56.530 19:43:43 -- common/autotest_common.sh@940 -- # kill -0 97406 00:22:56.530 19:43:43 -- common/autotest_common.sh@941 -- # uname 00:22:56.530 19:43:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.530 19:43:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97406 00:22:56.530 killing process with pid 97406 00:22:56.530 Received shutdown signal, test time was about 2.000000 seconds 00:22:56.530 00:22:56.530 Latency(us) 00:22:56.530 [2024-12-15T19:43:43.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.530 [2024-12-15T19:43:43.426Z] =================================================================================================================== 00:22:56.530 [2024-12-15T19:43:43.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.530 19:43:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:56.530 19:43:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:56.530 19:43:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97406' 00:22:56.530 19:43:43 -- common/autotest_common.sh@955 -- # kill 97406 00:22:56.530 19:43:43 -- common/autotest_common.sh@960 -- # wait 97406 00:22:56.789 19:43:43 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:56.789 19:43:43 -- host/digest.sh@77 -- # local rw bs qd 00:22:56.789 19:43:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:56.789 19:43:43 -- host/digest.sh@80 -- # rw=randwrite 00:22:56.789 19:43:43 -- host/digest.sh@80 -- # bs=131072 00:22:56.789 19:43:43 -- host/digest.sh@80 -- # qd=16 00:22:56.789 19:43:43 -- host/digest.sh@82 -- # bperfpid=97478 00:22:56.789 19:43:43 -- host/digest.sh@83 -- # waitforlisten 97478 /var/tmp/bperf.sock 00:22:56.789 19:43:43 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:56.789 19:43:43 -- 
common/autotest_common.sh@829 -- # '[' -z 97478 ']' 00:22:56.789 19:43:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:56.789 19:43:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.789 19:43:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:56.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:56.789 19:43:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.789 19:43:43 -- common/autotest_common.sh@10 -- # set +x 00:22:56.789 [2024-12-15 19:43:43.526787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:56.789 [2024-12-15 19:43:43.527652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97478 ] 00:22:56.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:56.789 Zero copy mechanism will not be used. 00:22:56.789 [2024-12-15 19:43:43.664656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.048 [2024-12-15 19:43:43.731846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.048 19:43:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.048 19:43:43 -- common/autotest_common.sh@862 -- # return 0 00:22:57.048 19:43:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:57.048 19:43:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:57.048 19:43:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:57.615 19:43:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.615 19:43:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.874 nvme0n1 00:22:57.874 19:43:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:57.874 19:43:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:57.874 Zero copy mechanism will not be used. 00:22:57.874 Running I/O for 2 seconds... 
00:22:59.829 00:22:59.829 Latency(us) 00:22:59.829 [2024-12-15T19:43:46.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.829 [2024-12-15T19:43:46.725Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:59.829 nvme0n1 : 2.00 8052.96 1006.62 0.00 0.00 1982.63 1727.77 7983.48 00:22:59.829 [2024-12-15T19:43:46.725Z] =================================================================================================================== 00:22:59.829 [2024-12-15T19:43:46.725Z] Total : 8052.96 1006.62 0.00 0.00 1982.63 1727.77 7983.48 00:22:59.829 0 00:22:59.829 19:43:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:59.829 19:43:46 -- host/digest.sh@92 -- # get_accel_stats 00:22:59.829 19:43:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:59.829 19:43:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:59.829 | select(.opcode=="crc32c") 00:22:59.829 | "\(.module_name) \(.executed)"' 00:22:59.829 19:43:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:00.396 19:43:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:00.396 19:43:46 -- host/digest.sh@93 -- # exp_module=software 00:23:00.396 19:43:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:00.396 19:43:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:00.396 19:43:46 -- host/digest.sh@97 -- # killprocess 97478 00:23:00.396 19:43:46 -- common/autotest_common.sh@936 -- # '[' -z 97478 ']' 00:23:00.396 19:43:46 -- common/autotest_common.sh@940 -- # kill -0 97478 00:23:00.396 19:43:46 -- common/autotest_common.sh@941 -- # uname 00:23:00.396 19:43:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:00.396 19:43:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97478 00:23:00.396 19:43:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:00.396 19:43:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:00.396 killing process with pid 97478 00:23:00.396 19:43:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97478' 00:23:00.396 19:43:47 -- common/autotest_common.sh@955 -- # kill 97478 00:23:00.396 Received shutdown signal, test time was about 2.000000 seconds 00:23:00.396 00:23:00.396 Latency(us) 00:23:00.396 [2024-12-15T19:43:47.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.397 [2024-12-15T19:43:47.293Z] =================================================================================================================== 00:23:00.397 [2024-12-15T19:43:47.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.397 19:43:47 -- common/autotest_common.sh@960 -- # wait 97478 00:23:00.655 19:43:47 -- host/digest.sh@126 -- # killprocess 97193 00:23:00.655 19:43:47 -- common/autotest_common.sh@936 -- # '[' -z 97193 ']' 00:23:00.655 19:43:47 -- common/autotest_common.sh@940 -- # kill -0 97193 00:23:00.655 19:43:47 -- common/autotest_common.sh@941 -- # uname 00:23:00.655 19:43:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:00.655 19:43:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97193 00:23:00.655 19:43:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:00.655 19:43:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:00.655 killing process with pid 97193 00:23:00.655 19:43:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97193' 
00:23:00.655 19:43:47 -- common/autotest_common.sh@955 -- # kill 97193 00:23:00.655 19:43:47 -- common/autotest_common.sh@960 -- # wait 97193 00:23:00.914 00:23:00.914 real 0m17.121s 00:23:00.914 user 0m32.485s 00:23:00.914 sys 0m4.886s 00:23:00.914 19:43:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:00.914 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:00.914 ************************************ 00:23:00.914 END TEST nvmf_digest_clean 00:23:00.914 ************************************ 00:23:00.914 19:43:47 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:00.914 19:43:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.914 19:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.914 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:00.914 ************************************ 00:23:00.914 START TEST nvmf_digest_error 00:23:00.914 ************************************ 00:23:00.914 19:43:47 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:00.914 19:43:47 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:00.914 19:43:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:00.914 19:43:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.914 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:00.914 19:43:47 -- nvmf/common.sh@469 -- # nvmfpid=97589 00:23:00.914 19:43:47 -- nvmf/common.sh@470 -- # waitforlisten 97589 00:23:00.914 19:43:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:00.914 19:43:47 -- common/autotest_common.sh@829 -- # '[' -z 97589 ']' 00:23:00.914 19:43:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.914 19:43:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.914 19:43:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.914 19:43:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.914 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:00.914 [2024-12-15 19:43:47.699687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:00.914 [2024-12-15 19:43:47.699763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.172 [2024-12-15 19:43:47.834867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.172 [2024-12-15 19:43:47.930016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:01.172 [2024-12-15 19:43:47.930201] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.172 [2024-12-15 19:43:47.930220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.172 [2024-12-15 19:43:47.930233] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.172 [2024-12-15 19:43:47.930273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.172 19:43:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.172 19:43:47 -- common/autotest_common.sh@862 -- # return 0 00:23:01.172 19:43:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:01.172 19:43:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.172 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:01.172 19:43:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.172 19:43:47 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:01.172 19:43:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.172 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:23:01.172 [2024-12-15 19:43:48.002841] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:01.172 19:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.172 19:43:48 -- host/digest.sh@104 -- # common_target_config 00:23:01.172 19:43:48 -- host/digest.sh@43 -- # rpc_cmd 00:23:01.172 19:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.172 19:43:48 -- common/autotest_common.sh@10 -- # set +x 00:23:01.431 null0 00:23:01.431 [2024-12-15 19:43:48.142808] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.431 [2024-12-15 19:43:48.167056] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.431 19:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.431 19:43:48 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:01.431 19:43:48 -- host/digest.sh@54 -- # local rw bs qd 00:23:01.431 19:43:48 -- host/digest.sh@56 -- # rw=randread 00:23:01.431 19:43:48 -- host/digest.sh@56 -- # bs=4096 00:23:01.431 19:43:48 -- host/digest.sh@56 -- # qd=128 00:23:01.431 19:43:48 -- host/digest.sh@58 -- # bperfpid=97614 00:23:01.431 19:43:48 -- host/digest.sh@60 -- # waitforlisten 97614 /var/tmp/bperf.sock 00:23:01.431 19:43:48 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:01.431 19:43:48 -- common/autotest_common.sh@829 -- # '[' -z 97614 ']' 00:23:01.431 19:43:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:01.431 19:43:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:01.431 19:43:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:01.431 19:43:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.431 19:43:48 -- common/autotest_common.sh@10 -- # set +x 00:23:01.431 [2024-12-15 19:43:48.219236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:01.431 [2024-12-15 19:43:48.219336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97614 ] 00:23:01.689 [2024-12-15 19:43:48.349606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.689 [2024-12-15 19:43:48.444465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.624 19:43:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.624 19:43:49 -- common/autotest_common.sh@862 -- # return 0 00:23:02.624 19:43:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:02.624 19:43:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:02.624 19:43:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:02.624 19:43:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.624 19:43:49 -- common/autotest_common.sh@10 -- # set +x 00:23:02.624 19:43:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.624 19:43:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.624 19:43:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.883 nvme0n1 00:23:02.883 19:43:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:02.883 19:43:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.883 19:43:49 -- common/autotest_common.sh@10 -- # set +x 00:23:02.883 19:43:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.883 19:43:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:02.883 19:43:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:03.142 Running I/O for 2 seconds... 
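This run exercises the data-digest error path end to end: crc32c on the target is routed to the error accel module (the target was started with --wait-for-rpc above precisely so this can be configured before initialization), bdevperf attaches the TCP controller with --ddgst so received data digests are verified, crc32c corruption is then armed, and perform_tests drives randread I/O for two seconds, with every completion expected to fail as a transient transport error (the flood of "data digest error" entries below). A minimal sketch of the same sequence using only the RPCs visible in this trace; rpc_cmd in the scripts is assumed to resolve to the nvmf target's default RPC socket, while the -s /var/tmp/bperf.sock calls go to the bdevperf app:

  # target side: route crc32c through the error-injection accel module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bdevperf side: retry forever on errors and attach the controller with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure no stale injection is active before attaching
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: arm crc32c corruption exactly as in the trace, then drive I/O from bdevperf
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests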
00:23:03.142 [2024-12-15 19:43:49.883289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.883351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.883367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.892465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.892502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.892515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.905862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.905896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.905909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.915323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.915359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.915372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.925291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.925326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.936237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.936274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.936287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.946884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.946918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.946930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.957367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.957402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.957415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.968262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.968298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.968310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.978093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.978129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.978141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.988652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.988687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.988700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:49.998372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:49.998408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:49.998421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:50.011748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:50.011787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:50.011801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:50.022236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:50.022274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:50.022287] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.142 [2024-12-15 19:43:50.034940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.142 [2024-12-15 19:43:50.034976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-12-15 19:43:50.034989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.049256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.049297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.049310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.063046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.063083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.063096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.076088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.076125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.076137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.088814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.088859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.088872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.102423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.102462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.102475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.114516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.114553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.114566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.123518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.123554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.123566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.135490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.135526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.135537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.401 [2024-12-15 19:43:50.149374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.401 [2024-12-15 19:43:50.149409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.401 [2024-12-15 19:43:50.149422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.162615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.162650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.175167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.175203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.175215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.186781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.186836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.186852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.198136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.198171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:03.402 [2024-12-15 19:43:50.198183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.209243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.209277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.209290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.220497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.220532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.220544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.229639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.229676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.229688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.241837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.241871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.241883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.256696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.256732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.256744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.268417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.268461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.268473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.402 [2024-12-15 19:43:50.281771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.402 [2024-12-15 19:43:50.281806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.402 [2024-12-15 19:43:50.281827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.296404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.296440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.296452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.308997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.309042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.309054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.322548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.322584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.322596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.334889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.334935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.334947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.348075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.348111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.348122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.361707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.361751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.373692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.373726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.373742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.387101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.387145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.400378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.400424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.400436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.411748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.411792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.411805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.423949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.423984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.423997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.437411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.437456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.437468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.450371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.450407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.450420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.459332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 
[2024-12-15 19:43:50.459378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.459391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.472120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.472156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.472170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.484950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.484997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.485009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.494990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.495025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.495038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.504754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.504801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.504826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.514274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.661 [2024-12-15 19:43:50.514321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.661 [2024-12-15 19:43:50.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.661 [2024-12-15 19:43:50.524961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.662 [2024-12-15 19:43:50.524996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.662 [2024-12-15 19:43:50.525009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.662 [2024-12-15 19:43:50.534935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd737f0) 00:23:03.662 [2024-12-15 19:43:50.534970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.662 [2024-12-15 19:43:50.534982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.662 [2024-12-15 19:43:50.547226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.662 [2024-12-15 19:43:50.547260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.662 [2024-12-15 19:43:50.547273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.920 [2024-12-15 19:43:50.560433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.920 [2024-12-15 19:43:50.560468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.560480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.572975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.573009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.573022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.585399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.585434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.585446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.594121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.594156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.594168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.609390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.609425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.609438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.619453] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.619488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.619500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.631628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.631674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.631686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.644140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.644176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.644188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.656330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.656365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.656378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.669393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.669427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.669439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.677609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.677644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.677657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.690220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.690255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.690267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:03.921 [2024-12-15 19:43:50.702475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.702509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.702521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.715483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.715519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.715531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.727510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.727545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.727557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.738738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.738773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.738785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.749143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.749179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.749192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.757896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.757930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.757942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.768150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.768185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.768198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.779506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.779539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.779551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.790879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.790912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.790928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.803220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.803256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.803268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.921 [2024-12-15 19:43:50.811869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:03.921 [2024-12-15 19:43:50.811903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.921 [2024-12-15 19:43:50.811915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.822422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.822470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.834365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.834401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.834413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.845242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.845278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.845291] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.856464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.856497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.856509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.866317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.866359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.866372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.876014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.876066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.876079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.885428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.885484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.885503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.897373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.897407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.897420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.907914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.907950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.907962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.918873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.918908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.918921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.928357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.928408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.928420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.938071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.938107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.938121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.948094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.948130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.948142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.957056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.957091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.957103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.966670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.966714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.966726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.978231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.978265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.978277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.987809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.987853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:04.181 [2024-12-15 19:43:50.987865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:50.999718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:50.999753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:50.999765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:51.011498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:51.011532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:51.011545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:51.023127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:51.023162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:51.023175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:51.034670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.181 [2024-12-15 19:43:51.034716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.181 [2024-12-15 19:43:51.034729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.181 [2024-12-15 19:43:51.043565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.182 [2024-12-15 19:43:51.043599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.182 [2024-12-15 19:43:51.043610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.182 [2024-12-15 19:43:51.055623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.182 [2024-12-15 19:43:51.055656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.182 [2024-12-15 19:43:51.055668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.182 [2024-12-15 19:43:51.067400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.182 [2024-12-15 19:43:51.067454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3106 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.182 [2024-12-15 19:43:51.067466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.080116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.080151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.080163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.092068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.092111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.092123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.103850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.103882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.103894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.116551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.116587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.116599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.126090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.126136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.126148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.135582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.135616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.135628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.145342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.145396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.145408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.156424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.156459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.156471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.169151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.169186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.169198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.180855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.441 [2024-12-15 19:43:51.180899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.441 [2024-12-15 19:43:51.180919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.441 [2024-12-15 19:43:51.192695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.192730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.192742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.201889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.201935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.201947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.211715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.211758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.211770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.221221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.221257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.221270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.232486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.232532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.232544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.243836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.243869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.243883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.254853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.254899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.254912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.264892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.264936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.264958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.275375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.275420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.275433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.284689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.284722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.284739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.297154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 
00:23:04.442 [2024-12-15 19:43:51.297216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.297229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.310120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.310165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.310177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.322644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.322694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.322706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.442 [2024-12-15 19:43:51.335099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.442 [2024-12-15 19:43:51.335133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.442 [2024-12-15 19:43:51.335144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.348192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.348226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.348237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.359384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.359435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.359447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.369088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.369124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.369136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.381066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.381101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.381113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.393353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.393387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.393399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.403948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.403994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.404007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.413509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.413544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.413556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.423712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.423746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.423758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.434480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.434515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.434528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.444308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.701 [2024-12-15 19:43:51.444342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.701 [2024-12-15 19:43:51.444354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.701 [2024-12-15 19:43:51.455238] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.455272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.455284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.464764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.464797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.464809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.476650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.476684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.476696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.486377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.486410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.495483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.495517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.495529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.507209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.507242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.507254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.519341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.519390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.519403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:04.702 [2024-12-15 19:43:51.531367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.531401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.531414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.543339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.543375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.543403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.555366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.555400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.555412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.563135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.563169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.563181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.575703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.575747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.575759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.702 [2024-12-15 19:43:51.587462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.702 [2024-12-15 19:43:51.587508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.702 [2024-12-15 19:43:51.587520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.598996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.961 [2024-12-15 19:43:51.599030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.961 [2024-12-15 19:43:51.599057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.608586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.961 [2024-12-15 19:43:51.608619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.961 [2024-12-15 19:43:51.608631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.617827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.961 [2024-12-15 19:43:51.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.961 [2024-12-15 19:43:51.617886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.627432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.961 [2024-12-15 19:43:51.627466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.961 [2024-12-15 19:43:51.627477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.637001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.961 [2024-12-15 19:43:51.637070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.961 [2024-12-15 19:43:51.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.961 [2024-12-15 19:43:51.646106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.646140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.646152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.655362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.655403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.655415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.667228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.667262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.667275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.679528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.679563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.679575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.691401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.691444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.691456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.700520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.700555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.700566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.709873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.709906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.709918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.719231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.719264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.719276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.729933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.729966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.729979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.737963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.737996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.738008] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.747744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.747779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.747791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.759041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.759090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.768501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.768547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.768560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.777711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.777746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.777758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.786933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.786978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.786990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.797374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.797420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.797432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.808438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.808472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:04.962 [2024-12-15 19:43:51.808485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.818249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.818284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.818296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.830935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.830982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.830994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.842890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.842932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.842944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.962 [2024-12-15 19:43:51.854882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:04.962 [2024-12-15 19:43:51.854923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.962 [2024-12-15 19:43:51.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.221 [2024-12-15 19:43:51.866191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd737f0) 00:23:05.221 [2024-12-15 19:43:51.866235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.221 [2024-12-15 19:43:51.866247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.221 00:23:05.221 Latency(us) 00:23:05.221 [2024-12-15T19:43:52.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.221 [2024-12-15T19:43:52.117Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:05.221 nvme0n1 : 2.00 22689.85 88.63 0.00 0.00 5636.35 2308.65 17277.67 00:23:05.221 [2024-12-15T19:43:52.117Z] =================================================================================================================== 00:23:05.221 [2024-12-15T19:43:52.117Z] Total : 22689.85 88.63 0.00 0.00 5636.35 2308.65 17277.67 00:23:05.221 0 00:23:05.221 19:43:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:05.221 19:43:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:05.221 19:43:51 
-- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:05.221 | .driver_specific 00:23:05.221 | .nvme_error 00:23:05.221 | .status_code 00:23:05.221 | .command_transient_transport_error' 00:23:05.221 19:43:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:05.480 19:43:52 -- host/digest.sh@71 -- # (( 178 > 0 )) 00:23:05.480 19:43:52 -- host/digest.sh@73 -- # killprocess 97614 00:23:05.480 19:43:52 -- common/autotest_common.sh@936 -- # '[' -z 97614 ']' 00:23:05.480 19:43:52 -- common/autotest_common.sh@940 -- # kill -0 97614 00:23:05.480 19:43:52 -- common/autotest_common.sh@941 -- # uname 00:23:05.480 19:43:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.480 19:43:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97614 00:23:05.480 19:43:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.480 19:43:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.480 killing process with pid 97614 00:23:05.480 19:43:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97614' 00:23:05.480 19:43:52 -- common/autotest_common.sh@955 -- # kill 97614 00:23:05.480 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.480 00:23:05.480 Latency(us) 00:23:05.480 [2024-12-15T19:43:52.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.480 [2024-12-15T19:43:52.376Z] =================================================================================================================== 00:23:05.480 [2024-12-15T19:43:52.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.480 19:43:52 -- common/autotest_common.sh@960 -- # wait 97614 00:23:05.739 19:43:52 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:05.739 19:43:52 -- host/digest.sh@54 -- # local rw bs qd 00:23:05.739 19:43:52 -- host/digest.sh@56 -- # rw=randread 00:23:05.739 19:43:52 -- host/digest.sh@56 -- # bs=131072 00:23:05.739 19:43:52 -- host/digest.sh@56 -- # qd=16 00:23:05.739 19:43:52 -- host/digest.sh@58 -- # bperfpid=97704 00:23:05.739 19:43:52 -- host/digest.sh@60 -- # waitforlisten 97704 /var/tmp/bperf.sock 00:23:05.739 19:43:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:05.739 19:43:52 -- common/autotest_common.sh@829 -- # '[' -z 97704 ']' 00:23:05.739 19:43:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.739 19:43:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.739 19:43:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.739 19:43:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.739 19:43:52 -- common/autotest_common.sh@10 -- # set +x 00:23:05.739 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:05.739 Zero copy mechanism will not be used. 00:23:05.739 [2024-12-15 19:43:52.596529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
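[editor's note] The check "(( 178 > 0 ))" above is host/digest.sh@71: the test treats the run as passed only if the bdev layer counted at least one TRANSIENT TRANSPORT ERROR completion caused by the injected crc32c digest corruption. The counter is read through the bdev_get_iostat RPC and the jq filter shown just above. A minimal sketch of that query, using the bperf RPC socket and bdev name exactly as they appear in this log (they may differ on other runs):

    # Query the transient-transport-error counter for nvme0n1 over the bperf RPC socket
    # (same call chain as host/digest.sh get_transient_errcount in this log).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
    # A non-zero result (178 in this run) confirms the injected data digest errors were
    # surfaced as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions; the per-status
    # counters are only populated because the controller was set up with
    # "bdev_nvme_set_options --nvme-error-stat", as seen further below in this log.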
00:23:05.739 [2024-12-15 19:43:52.596655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97704 ] 00:23:05.998 [2024-12-15 19:43:52.730694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.998 [2024-12-15 19:43:52.807860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.934 19:43:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.934 19:43:53 -- common/autotest_common.sh@862 -- # return 0 00:23:06.934 19:43:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:06.934 19:43:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:07.192 19:43:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:07.192 19:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.192 19:43:53 -- common/autotest_common.sh@10 -- # set +x 00:23:07.192 19:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.192 19:43:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.192 19:43:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.451 nvme0n1 00:23:07.451 19:43:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:07.451 19:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.451 19:43:54 -- common/autotest_common.sh@10 -- # set +x 00:23:07.451 19:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.451 19:43:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:07.451 19:43:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:07.711 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:07.711 Zero copy mechanism will not be used. 00:23:07.711 Running I/O for 2 seconds... 
00:23:07.711 [2024-12-15 19:43:54.399321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.399384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.399399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.402381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.402414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.402426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.405653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.405683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.405694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.409611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.409660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.409679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.413526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.413558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.413574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.416639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.416671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.416682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.420021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.420063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.420075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.423394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.423427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.423438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.426647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.426679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.426691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.430667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.430699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.430710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.433402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.433436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.433447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.436437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.436466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.436479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.440017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.440063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.440074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.443576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.443607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.443618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.446977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.447008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.447020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.450550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.450583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.450595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.454522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.454556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.454567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.457825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.457872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.457884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.461268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.461299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.461310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.464885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.464916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.464928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.468242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.468273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.468284] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.471394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.471425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.471436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.474195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.474225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.711 [2024-12-15 19:43:54.474236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.711 [2024-12-15 19:43:54.476996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.711 [2024-12-15 19:43:54.477026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.477039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.480434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.480464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.480476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.483805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.483848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.483860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.487011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.487041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.487052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.490136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.490166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:07.712 [2024-12-15 19:43:54.490177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.493083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.493114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.493126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.496492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.496524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.496536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.499478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.499509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.499521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.502378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.502420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.505443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.505474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.505486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.508439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.508470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.508481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.512024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.512056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.512068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.515102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.515134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.515146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.518536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.518568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.518580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.522046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.522077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.522089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.525057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.525089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.525100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.528434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.528466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.528477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.532081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.532114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.532125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.534950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.534981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.534993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.538286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.538333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.538354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.541139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.541171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.541182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.544981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.545015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.545026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.548601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.548633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.548645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.552361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.552393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.552404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.555374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.555405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.555417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.558678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 
[2024-12-15 19:43:54.558710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.558721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.561975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.562006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.562017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.565310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.712 [2024-12-15 19:43:54.565341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.712 [2024-12-15 19:43:54.565353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.712 [2024-12-15 19:43:54.568837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.568869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.568880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.572243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.572274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.572286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.575150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.575182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.575193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.578131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.578164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.578175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.581456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.581486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.581498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.584351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.584382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.584394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.587628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.587660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.587671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.590907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.590937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.590949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.594229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.594265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.594278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.597351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.597383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.597395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.600651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.600684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.600696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.713 [2024-12-15 19:43:54.603805] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.713 [2024-12-15 19:43:54.603849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.713 [2024-12-15 19:43:54.603861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.607069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.607100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.607112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.609197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.609226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.609237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.612720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.612753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.612764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.615848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.615879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.615891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.619007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.619039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.619050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.622433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.622465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.622476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:07.974 [2024-12-15 19:43:54.625536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.625567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.625578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.628734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.628764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.628777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.631783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.631825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.631839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.634724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.634767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.638008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.638038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.638049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.640950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.640980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.640991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.644218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.644248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.644260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.647200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.647230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.647241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.650483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.650513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.654039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.654070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.654082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.657572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.657605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.657616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.661040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.661071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.661083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.664516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.664559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.668002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.668035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.668046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.671718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.671751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.674963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.674994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.675005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.678062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.678093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.678104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.681344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.974 [2024-12-15 19:43:54.681377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.974 [2024-12-15 19:43:54.681388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.974 [2024-12-15 19:43:54.684625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.684658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.684669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.687893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.687925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.687936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.691036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.691067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.691080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.694157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.694187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.694199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.697398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.697428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.697440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.699980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.700012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.700024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.702897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.702927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.702938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.706121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.706151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.706163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.709421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.709451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.709462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.712369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.712400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:07.975 [2024-12-15 19:43:54.712412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.715575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.715604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.715615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.718845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.718875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.718886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.721741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.721770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.721782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.725038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.725069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.725080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.727982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.728012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.731276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.731306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.731317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.734546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.734578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.734590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.737622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.737653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.737664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.740759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.740791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.740803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.743988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.744019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.744031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.746876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.746906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.746917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.750054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.750083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.750094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.753390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.753421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.753432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.756360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.756390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.756402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.760106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.760137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.760148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.764197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.764229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.764241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.767178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.767208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.767219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.975 [2024-12-15 19:43:54.770455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.975 [2024-12-15 19:43:54.770485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.975 [2024-12-15 19:43:54.770497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.773573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.773616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.776341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.776372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.776384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.779022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.779053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.779064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.781559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.781590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.781601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.784522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.784555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.784566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.787988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.788020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.788032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.791124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.791157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.791168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.794323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.794365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.794377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.797863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.797894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.797905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.800793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.800837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.800849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.804044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.804076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.804088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.807380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.807412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.807423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.810886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.810918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.810930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.814263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.814295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.814306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.817358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.817390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.817401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.820413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.820445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.820456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.824090] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.824123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.824134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.827569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.827602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.827613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.830802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.830846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.830858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.833806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.833846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.833858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.836919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.836949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.836960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.839953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.839985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.839997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.843537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.843569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.843581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:07.976 [2024-12-15 19:43:54.846936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.846968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.846980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.850508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.850540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.850553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.853666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.853697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.853709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.856924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.856955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.976 [2024-12-15 19:43:54.856967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.976 [2024-12-15 19:43:54.860236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.976 [2024-12-15 19:43:54.860267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.977 [2024-12-15 19:43:54.860279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.977 [2024-12-15 19:43:54.863380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:07.977 [2024-12-15 19:43:54.863411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.977 [2024-12-15 19:43:54.863423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.866651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.866681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.866692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.869620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.869651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.869662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.872621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.872652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.872664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.875321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.875353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.875365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.878795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.878840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.878853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.882024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.882055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.882067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.885013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.885045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.885057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.888565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.888597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.888609] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.892131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.892163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.892175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.895264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.895295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.895307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.898594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.898626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.898637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.901970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.901999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.902010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.904714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.904746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.904757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.908059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.908089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.908100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.910882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.910912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.910923] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.914184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.914214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.914225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.917305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.917335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.917347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.919491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.919520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.919531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.922493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.922523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.922535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.926132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.926162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.926172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.929461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.929492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.293 [2024-12-15 19:43:54.929504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.932171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.293 [2024-12-15 19:43:54.932203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.293 [2024-12-15 19:43:54.932214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.293 [2024-12-15 19:43:54.935602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.935634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.935645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.938864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.938896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.938907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.941999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.942030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.942042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.945075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.945107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.945120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.947892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.947934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.951285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.951317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.951329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.954251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.954282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.957576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.957609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.957621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.961058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.961091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.961103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.964147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.964178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.964189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.967122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.967154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.967165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.969965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.969995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.970006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.973444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.973476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.973488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.976856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.976888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.976899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.980574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.980606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.980618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.984039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.984071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.984083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.987378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.987422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.990870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.990901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.990913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.994062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.994093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.994105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:54.997337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:54.997368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:54.997379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.000139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:55.000170] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:55.000181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.003264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:55.003295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:55.003306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.005483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:55.005513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:55.005525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.008745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:55.008777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:55.008788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.012224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.294 [2024-12-15 19:43:55.012256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.294 [2024-12-15 19:43:55.012268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.294 [2024-12-15 19:43:55.015347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.015377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.015389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.018675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.018707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.018719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.022363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 
00:23:08.295 [2024-12-15 19:43:55.022395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.022407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.025622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.025653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.025665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.029104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.029135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.029147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.032057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.032080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.032091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.035168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.035200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.035211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.038317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.038356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.038368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.041550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.041583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.041595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.044806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.044850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.044861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.048038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.048069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.048080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.050391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.050421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.050433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.053485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.053515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.053526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.057234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.057266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.057278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.060088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.060120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.060131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.063040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.063072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.063083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.066522] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.066554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.066566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.069749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.069780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.069792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.072734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.072764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.072775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.076089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.076121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.076133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.079021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.079052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.079063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.082021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.082052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.082063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.085247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.085279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.085290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:08.295 [2024-12-15 19:43:55.088286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.088318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.088330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.091346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.091378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.091390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.094512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.094545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.094556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.295 [2024-12-15 19:43:55.097879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.295 [2024-12-15 19:43:55.097910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.295 [2024-12-15 19:43:55.097922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.100763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.100793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.100804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.103996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.104027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.104038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.107478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.107508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.107519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.110758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.110789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.110800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.114375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.114407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.114418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.117052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.117083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.117094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.120120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.120152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.120163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.123012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.123044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.123055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.126380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.126411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.126423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.129525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.129556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.129567] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.132556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.132588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.132600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.136141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.136173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.136184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.140036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.140068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.140079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.143049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.143080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.143092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.146726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.146758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.146770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.150128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.150160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.150171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.153470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.153502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.153514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.157076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.157107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.157120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.159765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.159797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.159809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.163162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.163195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.163206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.166200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.166231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.166242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.169282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.169315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.169326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.172868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.172900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.172911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.176664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.176697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:08.296 [2024-12-15 19:43:55.176708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.179705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.179738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.179749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.296 [2024-12-15 19:43:55.183331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.296 [2024-12-15 19:43:55.183364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.296 [2024-12-15 19:43:55.183375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.556 [2024-12-15 19:43:55.186679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.556 [2024-12-15 19:43:55.186712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.556 [2024-12-15 19:43:55.186723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.190570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.190602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.190613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.193917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.193948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.193959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.197155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.197186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.197197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.199727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.199758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.199770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.202597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.202629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.202641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.205538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.205569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.205580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.208911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.208943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.208954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.211919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.211949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.211960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.215637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.215670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.215681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.218222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.218252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.221568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.221599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.221610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.224772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.224804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.224827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.228411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.228442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.228454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.232139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.232171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.232183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.235319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.235351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.235363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.238511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.238542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.238552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.242044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.242087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.245115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 
[2024-12-15 19:43:55.245145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.245156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.247996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.248027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.248038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.250907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.250938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.250949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.254317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.254357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.254369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.257417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.257448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.257459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.260614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.260646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.260657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.263998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.264030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.264041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.266941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.266972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.266983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.270376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.270408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.270420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.273109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.273140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.273151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.276358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.276391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.276402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.279734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.279765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.279776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.283024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.283055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.286390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.286422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.286433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.290053] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.290084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.290095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.293464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.293496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.293507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.296236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.296267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.296279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.299799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.299844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.299857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.302701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.302733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.302744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.305796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.305837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.305850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.557 [2024-12-15 19:43:55.309196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.557 [2024-12-15 19:43:55.309227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.557 [2024-12-15 19:43:55.309238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:08.558 [2024-12-15 19:43:55.312797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.312837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.312849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.316356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.316387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.316399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.319625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.319657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.319668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.322651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.322681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.322693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.325895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.325925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.325937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.329157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.329187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.329199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.332724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.332755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.332767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.335565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.335595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.335607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.338573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.338605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.338616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.342124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.342155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.342167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.345601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.345633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.345644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.348637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.348669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.348680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.351525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.351556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.351567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.354453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.354484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.354495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.357657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.357687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.357698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.361592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.361623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.361635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.364505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.364536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.364547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.367341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.367372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.367384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.370610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.370641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.370653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.373573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.373603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.373614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.377487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.377520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 
[2024-12-15 19:43:55.377532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.381041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.381072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.381084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.384552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.384596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.387787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.387829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.387841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.391063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.391104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.394066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.394096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.394107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.397440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.397472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.558 [2024-12-15 19:43:55.397483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.558 [2024-12-15 19:43:55.400234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.558 [2024-12-15 19:43:55.400264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.400276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.403844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.403874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.403885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.407241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.407271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.407283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.410218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.410249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.410260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.413478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.413509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.413521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.416431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.416461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.416472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.419748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.419779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.419790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.423149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.423180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.423191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.426315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.426352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.426363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.429729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.429761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.429772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.432566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.432597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.432608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.435910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.435940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.438690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.438721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.438733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.441855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.441885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.441896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.444502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.444532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.444543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.559 [2024-12-15 19:43:55.447437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.559 [2024-12-15 19:43:55.447468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.559 [2024-12-15 19:43:55.447480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.450211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.450241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.450252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.453799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.453839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.453851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.457348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.457379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.457390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.460318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.460349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.460360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.463520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.463551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.463562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.467194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 
[2024-12-15 19:43:55.467226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.467237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.470244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.470275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.470286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.473569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.473600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.473611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.477503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.477534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.477546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.480852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.480884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.480896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.484085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.484115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.484127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.487237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.487267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.487279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.490206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.490237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.490248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.493407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.493438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.493449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.496076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.496106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.496118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.498692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.498723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.498735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.502215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.502245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.502257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.505214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.505244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.505255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.508723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.508754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.511885] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.511915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.511928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.515133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.515164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.515175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.518271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.518301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.518313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.521506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.521537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.521549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.524183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.524214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.524225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.527668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.527700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.527711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.531334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.531365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.531376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:08.820 [2024-12-15 19:43:55.534780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.534811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.534837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.537851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.537879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.537890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.540670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.540701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.820 [2024-12-15 19:43:55.540713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.820 [2024-12-15 19:43:55.543915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.820 [2024-12-15 19:43:55.543946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.543958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.547367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.547399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.547410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.550702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.550734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.550746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.554053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.554082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.554094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.556967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.556998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.557011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.560132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.560164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.560176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.563099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.563131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.563142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.566615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.566645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.566657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.569614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.569644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.569655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.572931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.572962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.572973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.575962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.575993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.576005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.579385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.579416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.579428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.582230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.582261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.582272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.585383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.585413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.585424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.588467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.588497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.588509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.591754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.591784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.591795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.594623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.594654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.594665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.597811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.597852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.821 [2024-12-15 19:43:55.597863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.600860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.600889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.600900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.604144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.604174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.604185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.607188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.607219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.607230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.610624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.610655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.610666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.613677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.613707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.613718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.617089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.617121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.617133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.620646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.620677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.620689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.623804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.623846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.623858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.626777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.626809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.626833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.630207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.821 [2024-12-15 19:43:55.630238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.821 [2024-12-15 19:43:55.630249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.821 [2024-12-15 19:43:55.633327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.633359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.633370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.636568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.636600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.636611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.639426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.639457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.639468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.642982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.643013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.643024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.646266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.646297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.646308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.649326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.649357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.649368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.652627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.652658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.652670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.656423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.656455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.656466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.659687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.659718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.659730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.662742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.662772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.662784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.666066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 
00:23:08.822 [2024-12-15 19:43:55.666097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.666108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.669056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.669087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.669098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.672381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.672413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.672424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.675582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.675613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.675625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.679083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.679115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.679126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.682704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.682735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.682747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.686384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.686416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.686427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.690017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.690048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.690060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.693268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.693299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.693310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.695424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.695454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.695465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.698938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.698969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.698980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.702383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.702414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.702425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.705721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.705752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.705763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.708930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.708962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.708973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.822 [2024-12-15 19:43:55.711970] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:08.822 [2024-12-15 19:43:55.712001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.822 [2024-12-15 19:43:55.712012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.714907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.714937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.714948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.718616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.718648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.718660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.721391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.721422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.721433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.724279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.724310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.724321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.727585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.727616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.727627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.730627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.730670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:09.090 [2024-12-15 19:43:55.734014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.734045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.734056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.737369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.737401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.737412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.740172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.740202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.740213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.743574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.743606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.743618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.746585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.746617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.746629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.749998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.750028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.750039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.753372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.753403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.753414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.757016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.090 [2024-12-15 19:43:55.757047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.090 [2024-12-15 19:43:55.757058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.090 [2024-12-15 19:43:55.760709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.760742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.760753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.763434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.763465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.763477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.766728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.766760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.766772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.769875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.769904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.769915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.773488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.773519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.773531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.777275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.777306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.777318] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.781395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.781426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.781438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.784554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.784585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.784596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.788032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.788063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.788075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.791378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.791409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.791421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.794907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.794950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.797867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.797897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.797908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.801036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.801067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.801078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.804177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.804207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.804219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.807379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.807411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.807422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.810693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.810724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.810736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.813585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.813616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.813627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.816862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.816895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.816906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.820803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.820848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.820860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.823987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.824019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:09.091 [2024-12-15 19:43:55.824030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.827631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.827663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.827674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.831073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.831104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.831115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.835007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.835038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.835050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.838453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.838484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.838495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.841714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.841745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.841757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.844878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.844908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.844919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.848167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.848198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.848209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.091 [2024-12-15 19:43:55.851300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.091 [2024-12-15 19:43:55.851329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.091 [2024-12-15 19:43:55.851340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.853511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.853540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.853552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.856567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.856598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.856609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.859741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.859771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.859782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.862636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.862668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.862680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.865596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.865626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.865637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.868958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.868989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.869000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.872072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.872103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.872114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.875615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.875647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.875659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.879078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.879109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.879120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.882293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.882324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.882344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.885555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.885586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.885597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.888749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.888781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.888792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.892170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 
00:23:09.092 [2024-12-15 19:43:55.892201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.892213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.894904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.894933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.894944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.897444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.897475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.897486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.900912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.900944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.900955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.904209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.904240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.904252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.907692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.907724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.907736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.910673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.910704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.910717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.913966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.913995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.914007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.917197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.917229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.917240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.919987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.920018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.920029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.923546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.923578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.923589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.926722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.926752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.926764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.929694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.929723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.929734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.933217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.933248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.933259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.935981] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.936012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.092 [2024-12-15 19:43:55.936023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.092 [2024-12-15 19:43:55.939110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.092 [2024-12-15 19:43:55.939141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.939153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.941731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.941761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.941772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.944948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.944978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.944990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.948689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.948720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.951384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.951415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.951426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.954582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.954613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.954624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:09.093 [2024-12-15 19:43:55.957634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.957664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.957675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.960865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.960894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.960905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.963607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.963639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.963650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.967106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.967137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.967149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.971024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.971056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.971067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.093 [2024-12-15 19:43:55.974798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.093 [2024-12-15 19:43:55.974841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.093 [2024-12-15 19:43:55.974854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.978265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.978296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.978308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.981990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.982022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.982034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.985481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.985512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.985523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.988898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.988929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.988941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.992377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.992409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.992420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.996217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.996249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.996260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:55.999799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:55.999843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:55.999855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.003024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.003055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.003066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.006477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.006509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.006520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.009439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.009468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.009479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.012593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.012625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.012636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.016227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.016259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.016271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.019451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.019482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.019494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.022159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.382 [2024-12-15 19:43:56.022190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.382 [2024-12-15 19:43:56.022201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.382 [2024-12-15 19:43:56.025646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.025677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:09.383 [2024-12-15 19:43:56.025688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.029641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.029673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.029684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.032886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.032916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.032928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.036166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.036196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.036208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.039227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.039257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.039268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.041728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.041759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.041770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.045094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.045124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.045136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.048569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.048601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.048613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.051607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.051638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.051649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.054745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.054776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.054788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.057772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.057802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.057824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.060709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.060740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.060751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.064136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.064166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.064178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.067954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.067985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.067997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.071231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.071261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.071273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.073551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.073581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.073592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.077597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.077629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.077640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.080548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.080579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.080590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.084022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.084053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.084065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.087443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.087474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.087485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.090672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.090704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.090716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.093853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 
00:23:09.383 [2024-12-15 19:43:56.093883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.093894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.097351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.097382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.097393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.100064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.100094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.100105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.102663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.102694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.102706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.106010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.106040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.383 [2024-12-15 19:43:56.106052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.383 [2024-12-15 19:43:56.109138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.383 [2024-12-15 19:43:56.109169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.109181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.112537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.112567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.112578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.115850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.115891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.119076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.119106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.122375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.122407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.122418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.125337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.125366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.125378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.128655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.128686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.128697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.131541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.131572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.131583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.135028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.135059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.135070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.137916] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.137946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.137957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.141096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.141126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.141138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.144307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.144338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.144349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.147664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.147695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.147706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.151067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.151099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.151110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.153846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.153876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.153887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.157254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.157286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.157297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:09.384 [2024-12-15 19:43:56.160080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.160110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.160122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.163408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.163439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.163449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.166606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.166638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.166650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.170232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.170263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.170274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.173710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.173742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.173753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.176949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.176980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.176991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.180155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.180186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.180197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.183222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.183252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.183264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.186541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.186572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.186583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.189979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.190010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.190022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.193406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.193437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.193448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.384 [2024-12-15 19:43:56.196920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.384 [2024-12-15 19:43:56.196950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.384 [2024-12-15 19:43:56.196962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.200438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.200470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.200481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.204306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.204338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.204350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.207556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.207589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.207601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.210940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.210972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.210983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.214152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.214184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.214195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.218073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.218104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.218116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.221634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.221666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.221677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.225260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.225291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.225303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.228811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.228858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 
[2024-12-15 19:43:56.228870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.231765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.231796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.231808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.234566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.234597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.234609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.237764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.237794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.237805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.241260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.241292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.241303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.244735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.244767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.244778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.247621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.247652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.247664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.251125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.251156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.251167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.254088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.254119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.254131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.257123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.257155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.257166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.260434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.260464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.260474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.263249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.263279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.263290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.266668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.266697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.266708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.269882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.269911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.269923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.385 [2024-12-15 19:43:56.273007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.385 [2024-12-15 19:43:56.273036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.385 [2024-12-15 19:43:56.273046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.276267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.276298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.276309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.279086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.279116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.279127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.282671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.282702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.282713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.286114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.286144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.286156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.289017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.289048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.289060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.291855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.291885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.291896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.295516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 
19:43:56.295547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.295559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.298563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.298593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.298605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.301638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.301668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.301679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.645 [2024-12-15 19:43:56.305047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.645 [2024-12-15 19:43:56.305078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.645 [2024-12-15 19:43:56.305089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.307881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.307910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.307921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.311196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.311226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.311237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.314053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.314084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.314095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.317322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.317353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.317365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.320640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.320671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.320683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.324080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.324111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.324122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.327351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.327381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.327392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.329730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.329759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.329771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.332660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.332690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.332702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.335925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.335954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.335966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.338837] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.338865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.338876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.342045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.342074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.342086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.345331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.345361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.345372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.348765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.348797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.348808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.352279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.352311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.352322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.355481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.355512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.355523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.359189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.359220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.359231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:09.646 [2024-12-15 19:43:56.362584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.362615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.362626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.365918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.365949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.365960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.369075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.369106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.369118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.372182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.372213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.372224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.375515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.375546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.375558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.378843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.378885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.381558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.381589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.381600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.384617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.384648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.384660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.388284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.388315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.646 [2024-12-15 19:43:56.388326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.646 [2024-12-15 19:43:56.391780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.646 [2024-12-15 19:43:56.391812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.647 [2024-12-15 19:43:56.391836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.647 [2024-12-15 19:43:56.394389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b24a0) 00:23:09.647 [2024-12-15 19:43:56.394420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.647 [2024-12-15 19:43:56.394431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.647 00:23:09.647 Latency(us) 00:23:09.647 [2024-12-15T19:43:56.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.647 [2024-12-15T19:43:56.543Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:09.647 nvme0n1 : 2.00 9574.42 1196.80 0.00 0.00 1668.14 484.07 4885.41 00:23:09.647 [2024-12-15T19:43:56.543Z] =================================================================================================================== 00:23:09.647 [2024-12-15T19:43:56.543Z] Total : 9574.42 1196.80 0.00 0.00 1668.14 484.07 4885.41 00:23:09.647 0 00:23:09.647 19:43:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:09.647 19:43:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:09.647 19:43:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:09.647 | .driver_specific 00:23:09.647 | .nvme_error 00:23:09.647 | .status_code 00:23:09.647 | .command_transient_transport_error' 00:23:09.647 19:43:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:09.905 19:43:56 -- host/digest.sh@71 -- # (( 618 > 0 )) 00:23:09.905 19:43:56 -- host/digest.sh@73 -- # killprocess 97704 00:23:09.905 19:43:56 -- common/autotest_common.sh@936 -- # '[' -z 97704 ']' 00:23:09.906 19:43:56 -- common/autotest_common.sh@940 -- # kill -0 97704 00:23:09.906 19:43:56 -- common/autotest_common.sh@941 -- 
# uname 00:23:09.906 19:43:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.906 19:43:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97704 00:23:09.906 19:43:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:09.906 19:43:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:09.906 killing process with pid 97704 00:23:09.906 19:43:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97704' 00:23:09.906 19:43:56 -- common/autotest_common.sh@955 -- # kill 97704 00:23:09.906 Received shutdown signal, test time was about 2.000000 seconds 00:23:09.906 00:23:09.906 Latency(us) 00:23:09.906 [2024-12-15T19:43:56.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.906 [2024-12-15T19:43:56.802Z] =================================================================================================================== 00:23:09.906 [2024-12-15T19:43:56.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.906 19:43:56 -- common/autotest_common.sh@960 -- # wait 97704 00:23:10.472 19:43:57 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:10.473 19:43:57 -- host/digest.sh@54 -- # local rw bs qd 00:23:10.473 19:43:57 -- host/digest.sh@56 -- # rw=randwrite 00:23:10.473 19:43:57 -- host/digest.sh@56 -- # bs=4096 00:23:10.473 19:43:57 -- host/digest.sh@56 -- # qd=128 00:23:10.473 19:43:57 -- host/digest.sh@58 -- # bperfpid=97799 00:23:10.473 19:43:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:10.473 19:43:57 -- host/digest.sh@60 -- # waitforlisten 97799 /var/tmp/bperf.sock 00:23:10.473 19:43:57 -- common/autotest_common.sh@829 -- # '[' -z 97799 ']' 00:23:10.473 19:43:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:10.473 19:43:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:10.473 19:43:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:10.473 19:43:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.473 19:43:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.473 [2024-12-15 19:43:57.111904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:10.473 [2024-12-15 19:43:57.112005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97799 ] 00:23:10.473 [2024-12-15 19:43:57.243183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.473 [2024-12-15 19:43:57.327898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.407 19:43:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.407 19:43:58 -- common/autotest_common.sh@862 -- # return 0 00:23:11.407 19:43:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:11.408 19:43:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:11.666 19:43:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:11.666 19:43:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.666 19:43:58 -- common/autotest_common.sh@10 -- # set +x 00:23:11.666 19:43:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.666 19:43:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.666 19:43:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.925 nvme0n1 00:23:11.925 19:43:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:11.925 19:43:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.925 19:43:58 -- common/autotest_common.sh@10 -- # set +x 00:23:11.925 19:43:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.925 19:43:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:11.925 19:43:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:12.184 Running I/O for 2 seconds... 
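For reference, the sequence the xtrace above records (bdevperf restarted in wait mode, error-stat options, digest-enabled attach over TCP, crc32c corruption injected every 256th operation, the 2-second workload, and the transient-error readback seen earlier via bdev_get_iostat | jq) can be summarized as a standalone sketch. Every command, path, the 10.0.0.2:4420 target, and the jq filter are copied from this trace; the variable names, the sleep-based wait, and sending accel_error_inject_error to the default RPC socket are illustrative assumptions (the test issues those calls through its rpc_cmd helper rather than the bperf socket).

  #!/usr/bin/env bash
  # Sketch of the randwrite digest-error pass traced above. Commands, paths,
  # addresses, and the jq filter come from the xtrace; variable names and the
  # sleep are stand-ins for the test's waitforlisten helper.
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # bdevperf on core 1 (-m 2), 4 KiB random writes, queue depth 128, 2 s,
  # started in wait mode (-z) so it only runs I/O when told to over RPC.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  sleep 1   # the test waits for $BPERF_SOCK to start listening instead

  # Count NVMe status codes per controller and retry failed I/O indefinitely.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Digest corruption is injected in the accel layer of the peer SPDK app;
  # the trace issues these through rpc_cmd (not the bperf socket), so the
  # default-socket form used here is an assumption. Injection is disabled
  # while the controller attaches, then every 256th crc32c is corrupted.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second workload, then read back how many completions ended as
  # COMMAND TRANSIENT TRANSPORT ERROR; the test asserts this count is > 0
  # (618 in the randread pass shown earlier in this log).
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))

  kill "$bperfpid"

The per-command records that follow are the expected effect of that injection: each corrupted digest surfaces as a data digest error on the qpair and a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the iostat counter above accumulates.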
00:23:12.184 [2024-12-15 19:43:58.879741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:12.184 [2024-12-15 19:43:58.880152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.880182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.890722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fd640 00:23:12.184 [2024-12-15 19:43:58.891542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.891572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.899546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1ca0 00:23:12.184 [2024-12-15 19:43:58.900680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.900709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.908799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e1710 00:23:12.184 [2024-12-15 19:43:58.909208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.909237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.918495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de470 00:23:12.184 [2024-12-15 19:43:58.919381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.919414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.927491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e5ec8 00:23:12.184 [2024-12-15 19:43:58.928408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.928438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.937866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0a68 00:23:12.184 [2024-12-15 19:43:58.938789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.938833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c 
p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.944641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3060 00:23:12.184 [2024-12-15 19:43:58.944794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.944813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.955267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fac10 00:23:12.184 [2024-12-15 19:43:58.955814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.955858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.963573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e73e0 00:23:12.184 [2024-12-15 19:43:58.964481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.964509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.972816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e1b48 00:23:12.184 [2024-12-15 19:43:58.973163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.973189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.982366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fdeb0 00:23:12.184 [2024-12-15 19:43:58.982772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.982798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:58.992377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e99d8 00:23:12.184 [2024-12-15 19:43:58.993541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:58.993571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.001798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc560 00:23:12.184 [2024-12-15 19:43:59.002362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.002402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.011337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fa3a0 00:23:12.184 [2024-12-15 19:43:59.011938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.011967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.020471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:12.184 [2024-12-15 19:43:59.021289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.021318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.029702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6020 00:23:12.184 [2024-12-15 19:43:59.030500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.030530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.039647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e4578 00:23:12.184 [2024-12-15 19:43:59.040170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.040224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.048311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f57b0 00:23:12.184 [2024-12-15 19:43:59.049172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.049200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.057498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0a68 00:23:12.184 [2024-12-15 19:43:59.058893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.058922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.066654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f5378 00:23:12.184 [2024-12-15 19:43:59.068125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.184 [2024-12-15 19:43:59.068163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.184 [2024-12-15 19:43:59.076023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1ca0 00:23:12.185 [2024-12-15 19:43:59.077250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.185 [2024-12-15 19:43:59.077278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.085680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fcdd0 00:23:12.444 [2024-12-15 19:43:59.087038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.087077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.096185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ee190 00:23:12.444 [2024-12-15 19:43:59.097124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.097161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.103196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7da8 00:23:12.444 [2024-12-15 19:43:59.103336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.103356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.114194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e84c0 00:23:12.444 [2024-12-15 19:43:59.114618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.114644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.124505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f20d8 00:23:12.444 [2024-12-15 19:43:59.125508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.125554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.132518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e5a90 00:23:12.444 [2024-12-15 19:43:59.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.132874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.142930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6738 00:23:12.444 [2024-12-15 19:43:59.143844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.143871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.150545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ec840 00:23:12.444 [2024-12-15 19:43:59.151247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.151276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.161007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f3e60 00:23:12.444 [2024-12-15 19:43:59.161669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.161696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.169732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f96f8 00:23:12.444 [2024-12-15 19:43:59.170874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.170902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.179197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0788 00:23:12.444 [2024-12-15 19:43:59.179648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.179671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.190677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e88f8 00:23:12.444 [2024-12-15 19:43:59.191657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.191694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.197130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fbcf0 00:23:12.444 [2024-12-15 19:43:59.197923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.197951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.206460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f8a50 00:23:12.444 [2024-12-15 19:43:59.206565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.206586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.215970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:12.444 [2024-12-15 19:43:59.216252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.216276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.226133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ec408 00:23:12.444 [2024-12-15 19:43:59.226790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.226828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.235745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f8e88 00:23:12.444 [2024-12-15 19:43:59.236427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.236456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.243939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190dece0 00:23:12.444 [2024-12-15 19:43:59.244192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.244211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.253994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de038 00:23:12.444 [2024-12-15 19:43:59.254399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.254424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.263497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7c50 00:23:12.444 [2024-12-15 19:43:59.264206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.264235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.271771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e12d8 00:23:12.444 [2024-12-15 19:43:59.272036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.272061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.283115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0630 00:23:12.444 [2024-12-15 19:43:59.283748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.283776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.444 [2024-12-15 19:43:59.292309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9168 00:23:12.444 [2024-12-15 19:43:59.293866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.444 [2024-12-15 19:43:59.293893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.445 [2024-12-15 19:43:59.300676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ecc78 00:23:12.445 [2024-12-15 19:43:59.301732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.445 [2024-12-15 19:43:59.301759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.445 [2024-12-15 19:43:59.309947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f35f0 00:23:12.445 [2024-12-15 19:43:59.310230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.445 [2024-12-15 19:43:59.310249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:12.445 [2024-12-15 19:43:59.319533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f31b8 00:23:12.445 [2024-12-15 19:43:59.320337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.445 [2024-12-15 19:43:59.320366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.445 [2024-12-15 19:43:59.328761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f2948 00:23:12.445 [2024-12-15 19:43:59.329962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.445 [2024-12-15 19:43:59.329990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.338320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f46d0 00:23:12.704 [2024-12-15 19:43:59.339581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.339611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.347632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e73e0 00:23:12.704 [2024-12-15 19:43:59.348081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.348107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.356772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9168 00:23:12.704 [2024-12-15 19:43:59.357176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.357202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.367209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e73e0 00:23:12.704 [2024-12-15 19:43:59.368166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.368193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.376453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de470 00:23:12.704 [2024-12-15 19:43:59.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.377764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.384294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f81e0 00:23:12.704 [2024-12-15 19:43:59.385275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.385314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.392923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e4de8 00:23:12.704 [2024-12-15 19:43:59.393104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 
19:43:59.393124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.402256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ef6a8 00:23:12.704 [2024-12-15 19:43:59.402445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.402465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.412019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7818 00:23:12.704 [2024-12-15 19:43:59.413030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.413059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.423461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3d08 00:23:12.704 [2024-12-15 19:43:59.424046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.424072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.433196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0bc0 00:23:12.704 [2024-12-15 19:43:59.433753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.433783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.442459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e4578 00:23:12.704 [2024-12-15 19:43:59.442993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.443021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.451787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f3a28 00:23:12.704 [2024-12-15 19:43:59.452485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.452515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.460127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e8088 00:23:12.704 [2024-12-15 19:43:59.461154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 
[2024-12-15 19:43:59.461182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.469900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eea00 00:23:12.704 [2024-12-15 19:43:59.471192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.471240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.480000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e99d8 00:23:12.704 [2024-12-15 19:43:59.480786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.704 [2024-12-15 19:43:59.480835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:12.704 [2024-12-15 19:43:59.488085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f3e60 00:23:12.704 [2024-12-15 19:43:59.488979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.489018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.496230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9168 00:23:12.705 [2024-12-15 19:43:59.496383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.496402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.504907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fbcf0 00:23:12.705 [2024-12-15 19:43:59.505855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.505900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.515524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:12.705 [2024-12-15 19:43:59.516767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.524793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e2c28 00:23:12.705 [2024-12-15 19:43:59.525354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:12.705 [2024-12-15 19:43:59.525397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.533907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e84c0 00:23:12.705 [2024-12-15 19:43:59.535169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.535198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.542760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de038 00:23:12.705 [2024-12-15 19:43:59.543328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.543357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.552147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fe2e8 00:23:12.705 [2024-12-15 19:43:59.553082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.553117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.561745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9e10 00:23:12.705 [2024-12-15 19:43:59.562564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.562592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.570447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e4de8 00:23:12.705 [2024-12-15 19:43:59.571667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.571694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.579838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc560 00:23:12.705 [2024-12-15 19:43:59.580424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.580467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.588230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fac10 00:23:12.705 [2024-12-15 19:43:59.589201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.589229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:12.705 [2024-12-15 19:43:59.597542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fef90 00:23:12.705 [2024-12-15 19:43:59.597750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.705 [2024-12-15 19:43:59.597768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.607083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ebb98 00:23:12.964 [2024-12-15 19:43:59.607858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.964 [2024-12-15 19:43:59.607893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.616330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc128 00:23:12.964 [2024-12-15 19:43:59.617470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.964 [2024-12-15 19:43:59.617498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.625935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7818 00:23:12.964 [2024-12-15 19:43:59.627065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.964 [2024-12-15 19:43:59.627102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.635567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6cc8 00:23:12.964 [2024-12-15 19:43:59.636035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.964 [2024-12-15 19:43:59.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.644750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ddc00 00:23:12.964 [2024-12-15 19:43:59.645194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.964 [2024-12-15 19:43:59.645224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:12.964 [2024-12-15 19:43:59.654975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e95a0 00:23:12.964 [2024-12-15 19:43:59.655997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10579 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.656024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.662395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190df550 00:23:12.965 [2024-12-15 19:43:59.662840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.662865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.673353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc560 00:23:12.965 [2024-12-15 19:43:59.674527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.674556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.682276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7538 00:23:12.965 [2024-12-15 19:43:59.683653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.683682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.690566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f5378 00:23:12.965 [2024-12-15 19:43:59.691780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.691808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.699948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190dece0 00:23:12.965 [2024-12-15 19:43:59.700539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.700566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.709302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ecc78 00:23:12.965 [2024-12-15 19:43:59.709955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.709983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.717355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ed920 00:23:12.965 [2024-12-15 19:43:59.717579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12104 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.717597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.728710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f46d0 00:23:12.965 [2024-12-15 19:43:59.729480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.729508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.736686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eb328 00:23:12.965 [2024-12-15 19:43:59.737603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.737630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.745966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f92c0 00:23:12.965 [2024-12-15 19:43:59.746352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.746376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.756751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190dece0 00:23:12.965 [2024-12-15 19:43:59.757655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.757682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.763178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6cc8 00:23:12.965 [2024-12-15 19:43:59.763348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.763366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.773732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fdeb0 00:23:12.965 [2024-12-15 19:43:59.774301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.782386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e5ec8 00:23:12.965 [2024-12-15 19:43:59.782965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:13299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.782994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.790254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f31b8 00:23:12.965 [2024-12-15 19:43:59.791257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.791286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.798586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ebfd0 00:23:12.965 [2024-12-15 19:43:59.798762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.798781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.809661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc560 00:23:12.965 [2024-12-15 19:43:59.810603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.810631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.816219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190df118 00:23:12.965 [2024-12-15 19:43:59.816421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.816441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.826884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6fa8 00:23:12.965 [2024-12-15 19:43:59.827470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.827497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.835521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fdeb0 00:23:12.965 [2024-12-15 19:43:59.836746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.836775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.843157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eee38 00:23:12.965 [2024-12-15 19:43:59.843842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:18162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.843869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:12.965 [2024-12-15 19:43:59.851908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190efae0 00:23:12.965 [2024-12-15 19:43:59.852767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.965 [2024-12-15 19:43:59.852795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.224 [2024-12-15 19:43:59.860504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ec408 00:23:13.224 [2024-12-15 19:43:59.861539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.224 [2024-12-15 19:43:59.861568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.224 [2024-12-15 19:43:59.870475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ef270 00:23:13.224 [2024-12-15 19:43:59.871116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.224 [2024-12-15 19:43:59.871145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.224 [2024-12-15 19:43:59.879774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ea680 00:23:13.224 [2024-12-15 19:43:59.880377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.224 [2024-12-15 19:43:59.880396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.224 [2024-12-15 19:43:59.888725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eee38 00:23:13.224 [2024-12-15 19:43:59.889340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.224 [2024-12-15 19:43:59.889369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.897106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190feb58 00:23:13.225 [2024-12-15 19:43:59.897994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.898023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.906761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0788 00:23:13.225 [2024-12-15 19:43:59.908237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.908266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.916459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7da8 00:23:13.225 [2024-12-15 19:43:59.917973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.918001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.925470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6738 00:23:13.225 [2024-12-15 19:43:59.926910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.926938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.934557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f4298 00:23:13.225 [2024-12-15 19:43:59.935477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.935505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.943670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de8a8 00:23:13.225 [2024-12-15 19:43:59.944665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.944693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.952387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fdeb0 00:23:13.225 [2024-12-15 19:43:59.953327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.953355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.960339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1868 00:23:13.225 [2024-12-15 19:43:59.961011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.961039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.969330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ec840 00:23:13.225 [2024-12-15 19:43:59.969629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.969648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.978000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190df988 00:23:13.225 [2024-12-15 19:43:59.978281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.978306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.986632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f4f40 00:23:13.225 [2024-12-15 19:43:59.986911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.986930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:43:59.995637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eaab8 00:23:13.225 [2024-12-15 19:43:59.996277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:43:59.996307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.005656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e8088 00:23:13.225 [2024-12-15 19:44:00.006142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.006171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.016020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e84c0 00:23:13.225 [2024-12-15 19:44:00.016437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.016464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.026412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e5a90 00:23:13.225 [2024-12-15 19:44:00.026922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.026955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.037072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6b70 00:23:13.225 [2024-12-15 19:44:00.037410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.037436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.046406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f4f40 00:23:13.225 [2024-12-15 19:44:00.046772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.055882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3d08 00:23:13.225 [2024-12-15 19:44:00.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.056215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.065113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6fa8 00:23:13.225 [2024-12-15 19:44:00.065362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.065389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.076522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ef6a8 00:23:13.225 [2024-12-15 19:44:00.077552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.077581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.082890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1ca0 00:23:13.225 [2024-12-15 19:44:00.083853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.083882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.093595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f4b08 00:23:13.225 [2024-12-15 19:44:00.094285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.094313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.101862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3d08 00:23:13.225 
[2024-12-15 19:44:00.102709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.102738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.225 [2024-12-15 19:44:00.111585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7c50 00:23:13.225 [2024-12-15 19:44:00.112645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.225 [2024-12-15 19:44:00.112673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.121169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ee5c8 00:23:13.485 [2024-12-15 19:44:00.121568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.121593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.129394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eaef0 00:23:13.485 [2024-12-15 19:44:00.130086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.130115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.139118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f4f40 00:23:13.485 [2024-12-15 19:44:00.139622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.139657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.147910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190efae0 00:23:13.485 [2024-12-15 19:44:00.148386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.148412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.156518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eee38 00:23:13.485 [2024-12-15 19:44:00.157278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.157307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.165262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with 
pdu=0x2000190f5378 00:23:13.485 [2024-12-15 19:44:00.165873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.165900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.173962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de8a8 00:23:13.485 [2024-12-15 19:44:00.174536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.174563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.182613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e27f0 00:23:13.485 [2024-12-15 19:44:00.183181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.183211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.191263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190df118 00:23:13.485 [2024-12-15 19:44:00.191812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.191850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.199921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7100 00:23:13.485 [2024-12-15 19:44:00.200485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.200513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.208273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:13.485 [2024-12-15 19:44:00.209044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.209078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.216876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ea680 00:23:13.485 [2024-12-15 19:44:00.218162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.218189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.225570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7caa00) with pdu=0x2000190fb480 00:23:13.485 [2024-12-15 19:44:00.226960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.226988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.233681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fa7d8 00:23:13.485 [2024-12-15 19:44:00.234456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.234486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.245148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ebfd0 00:23:13.485 [2024-12-15 19:44:00.245863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.245891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.252913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f46d0 00:23:13.485 [2024-12-15 19:44:00.253710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.253738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.261852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7c50 00:23:13.485 [2024-12-15 19:44:00.262183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.262207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.272680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f8a50 00:23:13.485 [2024-12-15 19:44:00.273532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.273560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.279113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190edd58 00:23:13.485 [2024-12-15 19:44:00.279237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.279255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.289854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7caa00) with pdu=0x2000190dfdc0 00:23:13.485 [2024-12-15 19:44:00.290508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.290536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.298714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ee5c8 00:23:13.485 [2024-12-15 19:44:00.299433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.299460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.306363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3060 00:23:13.485 [2024-12-15 19:44:00.306662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.306686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.315241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fa3a0 00:23:13.485 [2024-12-15 19:44:00.315606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.485 [2024-12-15 19:44:00.315631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.485 [2024-12-15 19:44:00.324666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f8e88 00:23:13.486 [2024-12-15 19:44:00.325718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.325746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.486 [2024-12-15 19:44:00.333537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7818 00:23:13.486 [2024-12-15 19:44:00.334015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.334040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.486 [2024-12-15 19:44:00.344279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9e10 00:23:13.486 [2024-12-15 19:44:00.345280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.345307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.486 [2024-12-15 19:44:00.350701] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fd208 00:23:13.486 [2024-12-15 19:44:00.350985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.351009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.486 [2024-12-15 19:44:00.361343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3d08 00:23:13.486 [2024-12-15 19:44:00.362015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.362044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.486 [2024-12-15 19:44:00.368914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f81e0 00:23:13.486 [2024-12-15 19:44:00.369650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.486 [2024-12-15 19:44:00.369678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.378848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e5658 00:23:13.745 [2024-12-15 19:44:00.379489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.379515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.387832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3d08 00:23:13.745 [2024-12-15 19:44:00.389370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.389400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.396828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190feb58 00:23:13.745 [2024-12-15 19:44:00.397587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.397614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.405173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1430 00:23:13.745 [2024-12-15 19:44:00.405594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.405625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.745 
[2024-12-15 19:44:00.414612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0ea0 00:23:13.745 [2024-12-15 19:44:00.415104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.415129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.424062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7100 00:23:13.745 [2024-12-15 19:44:00.425215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.425243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.432944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fe2e8 00:23:13.745 [2024-12-15 19:44:00.433524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.433551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.441249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f46d0 00:23:13.745 [2024-12-15 19:44:00.442186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.442213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.745 [2024-12-15 19:44:00.450127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ebb98 00:23:13.745 [2024-12-15 19:44:00.450503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.745 [2024-12-15 19:44:00.450527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.460978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6cc8 00:23:13.746 [2024-12-15 19:44:00.461891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.461919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.467450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb480 00:23:13.746 [2024-12-15 19:44:00.467581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.467600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:23:13.746 [2024-12-15 19:44:00.477604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190df988 00:23:13.746 [2024-12-15 19:44:00.478137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.478161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.486497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f6890 00:23:13.746 [2024-12-15 19:44:00.487095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.487122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.494995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ed920 00:23:13.746 [2024-12-15 19:44:00.496142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.496169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.503977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6fa8 00:23:13.746 [2024-12-15 19:44:00.504740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.504767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.513225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0ea0 00:23:13.746 [2024-12-15 19:44:00.514398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.514425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.521042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e0ea0 00:23:13.746 [2024-12-15 19:44:00.521810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.521846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.529733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f8a50 00:23:13.746 [2024-12-15 19:44:00.530245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.530273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.538466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fc560 00:23:13.746 [2024-12-15 19:44:00.538961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.538985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.547145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3498 00:23:13.746 [2024-12-15 19:44:00.547630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.547664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.555803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7100 00:23:13.746 [2024-12-15 19:44:00.556254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.556280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.564450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7c50 00:23:13.746 [2024-12-15 19:44:00.564894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.564918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.573053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fbcf0 00:23:13.746 [2024-12-15 19:44:00.573539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.573564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.581996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0788 00:23:13.746 [2024-12-15 19:44:00.582758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.582791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.591938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eee38 00:23:13.746 [2024-12-15 19:44:00.592641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.592673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.599492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb480 00:23:13.746 [2024-12-15 19:44:00.600346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.600373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.608325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f1ca0 00:23:13.746 [2024-12-15 19:44:00.608688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.608712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.617168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f81e0 00:23:13.746 [2024-12-15 19:44:00.617472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.617504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.626240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e12d8 00:23:13.746 [2024-12-15 19:44:00.627146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.627174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.746 [2024-12-15 19:44:00.634330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e6300 00:23:13.746 [2024-12-15 19:44:00.634494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.746 [2024-12-15 19:44:00.634512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:14.005 [2024-12-15 19:44:00.643231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7818 00:23:14.005 [2024-12-15 19:44:00.643540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.005 [2024-12-15 19:44:00.643564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:14.005 [2024-12-15 19:44:00.651913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ee5c8 00:23:14.005 [2024-12-15 19:44:00.652042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.005 [2024-12-15 19:44:00.652061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:14.005 [2024-12-15 19:44:00.663215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb480 00:23:14.005 [2024-12-15 19:44:00.664219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.005 [2024-12-15 19:44:00.664246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:14.005 [2024-12-15 19:44:00.669665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eb760 00:23:14.006 [2024-12-15 19:44:00.669950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.669975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.678974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0788 00:23:14.006 [2024-12-15 19:44:00.680064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.680091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.687843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190dece0 00:23:14.006 [2024-12-15 19:44:00.688754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.688780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.697041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e1710 00:23:14.006 [2024-12-15 19:44:00.698019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.698046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.706276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb048 00:23:14.006 [2024-12-15 19:44:00.707317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.707356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.716350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e4578 00:23:14.006 [2024-12-15 19:44:00.716988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.717015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.725413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f9b30 00:23:14.006 [2024-12-15 19:44:00.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.726102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.732978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e9e10 00:23:14.006 [2024-12-15 19:44:00.733245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.733269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.743570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb480 00:23:14.006 [2024-12-15 19:44:00.745060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.745087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.752149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190de8a8 00:23:14.006 [2024-12-15 19:44:00.752841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.752867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.760413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f46d0 00:23:14.006 [2024-12-15 19:44:00.761381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.761408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.768548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fcdd0 00:23:14.006 [2024-12-15 19:44:00.769527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.769554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.777162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190fb8b8 00:23:14.006 [2024-12-15 19:44:00.778124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.778150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.785758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e88f8 00:23:14.006 [2024-12-15 19:44:00.786720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.786748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.794384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e3060 00:23:14.006 [2024-12-15 19:44:00.795317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.795347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.803010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f0350 00:23:14.006 [2024-12-15 19:44:00.803751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.803779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.813424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eb760 00:23:14.006 [2024-12-15 19:44:00.814180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.814207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.821619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f7da8 00:23:14.006 [2024-12-15 19:44:00.822777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.822806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.831022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190ee190 00:23:14.006 [2024-12-15 19:44:00.831599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.831624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.841170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e1710 00:23:14.006 [2024-12-15 19:44:00.841885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 
19:44:00.841913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.850348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190eb760 00:23:14.006 [2024-12-15 19:44:00.851047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.851074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.859516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190f9f68 00:23:14.006 [2024-12-15 19:44:00.860242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.860270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:14.006 [2024-12-15 19:44:00.868663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caa00) with pdu=0x2000190e7c50 00:23:14.006 [2024-12-15 19:44:00.869349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.006 [2024-12-15 19:44:00.869376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:14.006 00:23:14.006 Latency(us) 00:23:14.006 [2024-12-15T19:44:00.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.006 [2024-12-15T19:44:00.902Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:14.006 nvme0n1 : 2.00 27921.82 109.07 0.00 0.00 4579.29 1817.13 12094.37 00:23:14.006 [2024-12-15T19:44:00.902Z] =================================================================================================================== 00:23:14.006 [2024-12-15T19:44:00.902Z] Total : 27921.82 109.07 0.00 0.00 4579.29 1817.13 12094.37 00:23:14.006 0 00:23:14.006 19:44:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:14.006 19:44:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:14.006 19:44:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:14.006 19:44:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:14.006 | .driver_specific 00:23:14.006 | .nvme_error 00:23:14.006 | .status_code 00:23:14.006 | .command_transient_transport_error' 00:23:14.574 19:44:01 -- host/digest.sh@71 -- # (( 219 > 0 )) 00:23:14.574 19:44:01 -- host/digest.sh@73 -- # killprocess 97799 00:23:14.574 19:44:01 -- common/autotest_common.sh@936 -- # '[' -z 97799 ']' 00:23:14.574 19:44:01 -- common/autotest_common.sh@940 -- # kill -0 97799 00:23:14.574 19:44:01 -- common/autotest_common.sh@941 -- # uname 00:23:14.574 19:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:14.574 19:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97799 00:23:14.574 19:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:14.574 19:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:14.574 killing process with pid 97799 
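[editorial note] The trace above shows how the digest test confirms that the injected CRC32C corruption actually surfaced as NVMe transient transport errors: it queries bdevperf over its RPC socket with bdev_get_iostat and pulls the per-status-code counter out with jq, then asserts the count is non-zero (219 in this run). Below is a minimal stand-alone sketch of that check, reconstructed from the commands visible in the xtrace; the helper body and error handling are assumptions, not the literal host/digest.sh source.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above -- not the literal host/digest.sh source.
ROOT_DIR=/home/vagrant/spdk_repo/spdk          # repo path taken from the trace
BPERF_SOCK=/var/tmp/bperf.sock                 # bdevperf RPC socket used by the test

# Read the command_transient_transport_error counter for one bdev.
# This counter is only populated because bdev_nvme_set_options was called
# with --nvme-error-stat earlier in the run.
get_transient_errcount() {
    local bdev=$1
    "$ROOT_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The run above reported 219 such errors; the test only requires a non-zero count.
(( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }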
00:23:14.574 19:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97799' 00:23:14.574 Received shutdown signal, test time was about 2.000000 seconds 00:23:14.574 00:23:14.574 Latency(us) 00:23:14.574 [2024-12-15T19:44:01.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.574 [2024-12-15T19:44:01.470Z] =================================================================================================================== 00:23:14.574 [2024-12-15T19:44:01.470Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.574 19:44:01 -- common/autotest_common.sh@955 -- # kill 97799 00:23:14.574 19:44:01 -- common/autotest_common.sh@960 -- # wait 97799 00:23:14.833 19:44:01 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:14.833 19:44:01 -- host/digest.sh@54 -- # local rw bs qd 00:23:14.833 19:44:01 -- host/digest.sh@56 -- # rw=randwrite 00:23:14.833 19:44:01 -- host/digest.sh@56 -- # bs=131072 00:23:14.833 19:44:01 -- host/digest.sh@56 -- # qd=16 00:23:14.833 19:44:01 -- host/digest.sh@58 -- # bperfpid=97885 00:23:14.833 19:44:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:14.833 19:44:01 -- host/digest.sh@60 -- # waitforlisten 97885 /var/tmp/bperf.sock 00:23:14.833 19:44:01 -- common/autotest_common.sh@829 -- # '[' -z 97885 ']' 00:23:14.833 19:44:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.833 19:44:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.833 19:44:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.833 19:44:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.833 19:44:01 -- common/autotest_common.sh@10 -- # set +x 00:23:14.833 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:14.833 Zero copy mechanism will not be used. 00:23:14.833 [2024-12-15 19:44:01.574703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:14.833 [2024-12-15 19:44:01.574839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97885 ] 00:23:14.833 [2024-12-15 19:44:01.713566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.092 [2024-12-15 19:44:01.800340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.027 19:44:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.027 19:44:02 -- common/autotest_common.sh@862 -- # return 0 00:23:16.027 19:44:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:16.027 19:44:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:16.027 19:44:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:16.027 19:44:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.027 19:44:02 -- common/autotest_common.sh@10 -- # set +x 00:23:16.027 19:44:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.027 19:44:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.027 19:44:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.594 nvme0n1 00:23:16.594 19:44:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:16.594 19:44:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.594 19:44:03 -- common/autotest_common.sh@10 -- # set +x 00:23:16.594 19:44:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.594 19:44:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:16.594 19:44:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.594 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:16.594 Zero copy mechanism will not be used. 00:23:16.594 Running I/O for 2 seconds... 
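[editorial note] The xtrace leading up to the "Running I/O for 2 seconds..." banner above repeats the setup sequence used for each digest-error iteration: start bdevperf idle, enable per-status-code NVMe error statistics with a -1 retry count, make sure CRC32C error injection is off while the NVMe/TCP controller attaches with data digest enabled, then turn corruption injection on and kick off the workload. The condensed sketch below is assembled from the RPC calls visible in the trace; the paths, socket, controller name, and flag values are copied from the log, while the wait loop stands in for the script's waitforlisten step and the step comments are an interpretation.

#!/usr/bin/env bash
# Condensed from the host/digest.sh xtrace above; a sketch, not the script itself.
ROOT_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
rpc() { "$ROOT_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

# 1. Start bdevperf idle (-z) on core 1 (-m 2): 128 KiB random writes, queue depth 16, 2 s run.
"$ROOT_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &

# Wait for the RPC socket (the real script uses waitforlisten here).
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done

# 2. Track NVMe errors per status code and set the bdev retry count to -1,
#    so digest failures are counted and retried rather than failing the job.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Keep CRC32C error injection disabled while the controller attaches...
rpc accel_error_inject_error -o crc32c -t disable

# 4. ...attach the NVMe/TCP controller with data digest (--ddgst) enabled...
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. ...then start corrupting CRC32C results (the "-i 32" argument is copied
#    verbatim from the trace) so data digests begin to fail.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# 6. Run the I/O; each digest failure appears in the log as a data digest error
#    followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, as seen below.
"$ROOT_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests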
00:23:16.594 [2024-12-15 19:44:03.374766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.375021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.375051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.378687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.378829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.378857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.382538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.382640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.382662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.386316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.386464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.386486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.390171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.390245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.394037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.394137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.398127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.398236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.398256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.594 [2024-12-15 19:44:03.402092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.594 [2024-12-15 19:44:03.402265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.594 [2024-12-15 19:44:03.402286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.405955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.406164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.406184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.409773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.409917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.409937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.413666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.413774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.413794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.417525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.417601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.421374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.421469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.421489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.425201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.425309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.425330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.429073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.429175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.429200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.432918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.433108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.433128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.436788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.436988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.437009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.440675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.440816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.440836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.444514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.444639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.444660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.448386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.448486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.448507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.452215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.452323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.452344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.456130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.456267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.456288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.460044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.460213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.463952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.464135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.464156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.467755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.467953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.467974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.471661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.471826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.471862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.475545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.475626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.475647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.479449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.479532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 
19:44:03.479552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.483316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.483404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.483441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.595 [2024-12-15 19:44:03.487191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.595 [2024-12-15 19:44:03.487319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.595 [2024-12-15 19:44:03.487341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.855 [2024-12-15 19:44:03.491031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.855 [2024-12-15 19:44:03.491144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.855 [2024-12-15 19:44:03.491164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.855 [2024-12-15 19:44:03.495050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.495232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.495254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.498888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.499068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.502755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.502923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.502944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.506571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.506665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.506702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.510463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.510546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.510567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.514263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.514349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.514387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.518224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.518363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.518385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.522070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.522207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.522228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.525978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.526170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.526191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.529728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.529933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.529954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.533600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.533731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.533752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.537420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.537525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.537546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.541291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.541384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.541405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.545053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.545128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.545149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.548756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.548900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.552564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.552678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.552698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.556395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.556580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.556600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.560248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.560443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.560463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.564006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.564167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.564187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.567636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.567708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.567728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.571431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.571502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.571522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.575167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.575248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.575269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.579015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.579149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.579170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.582807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.582948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.582969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.586586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.586780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.586800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.590370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.590540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.590561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.594260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.594437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.856 [2024-12-15 19:44:03.594458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.856 [2024-12-15 19:44:03.598079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.856 [2024-12-15 19:44:03.598182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.598202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.601720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.601794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.601827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.605497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.605569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.605589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.609293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.609413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.609432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.612983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 
[2024-12-15 19:44:03.613115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.613135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.616712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.616912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.616933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.620526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.620702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.620722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.624346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.624493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.624514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.628191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.628270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.628291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.631979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.632066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.632087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.635780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.635868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.635888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.639584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) 
with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.639719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.639740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.643471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.643603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.643624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.647468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.647639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.647660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.651356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.651574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.651595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.655126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.655273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.655294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.658915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.658995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.659015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.662577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.662671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.662690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.666419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.666497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.666518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.670191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.673925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.674084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.674105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.677723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.677925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.677945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.681529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.681752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.681795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.685322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.685473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.685493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.689077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.857 [2024-12-15 19:44:03.689174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.857 [2024-12-15 19:44:03.689194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.857 [2024-12-15 19:44:03.692872] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.692946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.692967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.696562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.696637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.696658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.700492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.700624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.700644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.704498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.704613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.704633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.708302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.708491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.708522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.712137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.712305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.712324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.716010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.716174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.716196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:16.858 [2024-12-15 19:44:03.719785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.719896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.719917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.723590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.723676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.723696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.727378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.727477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.731226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.731350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.731371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.735123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.735318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.735338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.739045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.739214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.739234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.742869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.743020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.743040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.858 [2024-12-15 19:44:03.746697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:16.858 [2024-12-15 19:44:03.746859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.858 [2024-12-15 19:44:03.746879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.750492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.750592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.754289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.754385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.754405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.758177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.758266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.758286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.762103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.762229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.762249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.765940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.766053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.766073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.769818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.770018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.118 [2024-12-15 19:44:03.770038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.118 [2024-12-15 19:44:03.773541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.118 [2024-12-15 19:44:03.773724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.773744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.777412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.777572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.777591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.781167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.781275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.784959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.785044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.785065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.788683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.788771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.788791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.792601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.792721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.792741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.796440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.796563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.796584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.800420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.800598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.800620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.804274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.804445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.804464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.808047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.808193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.808214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.811760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.811834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.811854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.815461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.815540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.819239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.819327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.819348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.823108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.823238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 
19:44:03.823259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.826873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.826988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.827009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.830687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.830885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.830906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.834557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.834759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.834779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.838438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.838584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.838604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.842264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.842391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.842412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.846160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.846254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.846275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.849912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.849984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.850004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.853669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.853799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.853819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.857497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.857637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.857657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.861358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.861530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.861551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.865138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.865346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.865366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.868879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.869039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.119 [2024-12-15 19:44:03.869060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.119 [2024-12-15 19:44:03.872658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.119 [2024-12-15 19:44:03.872756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.872776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.876426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.876510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.876530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.880250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.880340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.880360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.884023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.884153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.884174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.887785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.887918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.887939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.891642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.891821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.891842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.895461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.895672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.895692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.899253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.899403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.899423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.903037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.903135] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.903156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.906880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.906980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.906999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.910638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.910747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.910767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.914506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.914657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.914677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.918328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.918460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.918481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.922239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.922426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.922447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.926103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.926275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.926295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.929905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.930057] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.930077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.933742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.933846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.933880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.937574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.937652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.937671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.941355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.941430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.941450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.945269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.945411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.945432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.949072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.949191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.949211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.952916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.953098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.953119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.956749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 
00:23:17.120 [2024-12-15 19:44:03.956926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.956946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.960469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.960621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.960641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.964317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.964393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.964428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.968151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.968252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.971904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.971988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.972011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.975727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.120 [2024-12-15 19:44:03.975853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.120 [2024-12-15 19:44:03.975887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.120 [2024-12-15 19:44:03.979581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.979695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.979715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:03.983495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.983673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.983693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:03.987318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.987559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.987579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:03.991221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.991363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.991383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:03.995255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.995348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.995368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:03.999167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:03.999259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:03.999280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:04.003170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:04.003276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:04.003295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:04.007135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:04.007287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:04.007306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.121 [2024-12-15 19:44:04.011010] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.121 [2024-12-15 19:44:04.011126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.121 [2024-12-15 19:44:04.011146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.014911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.015090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.015114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.018658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.018842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.018862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.022501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.022654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.022675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.026372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.026450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.026471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.030185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.030271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.030291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.033986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.034100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.034121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:17.381 [2024-12-15 19:44:04.037761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.037897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.037917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.041549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.041638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.045401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.045572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.045592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.049188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.049380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.052897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.053045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.053065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.056700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.381 [2024-12-15 19:44:04.056788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.381 [2024-12-15 19:44:04.056808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.381 [2024-12-15 19:44:04.060504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.060619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.060640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.064360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.064457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.064478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.068252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.068379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.068399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.072073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.072189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.072209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.075912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.076100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.076120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.079646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.079848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.079867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.083473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.083628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.083648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.087365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.087459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.087479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.091157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.091264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.091283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.094887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.094966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.094986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.098720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.098861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.098881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.102447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.102578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.102598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.106327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.106527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.106547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.110217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.110419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.110439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.114038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.114195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.114216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.117831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.117926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.117946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.121578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.121668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.121705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.125307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.125384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.125404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.129113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.129260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.129281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.132910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.133009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.133029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.136687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.136908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.136931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.140574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.140743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 
[2024-12-15 19:44:04.140763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.144365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.144518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.148142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.148226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.148246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.151880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.151973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.151993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.155549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.382 [2024-12-15 19:44:04.155623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.382 [2024-12-15 19:44:04.155642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.382 [2024-12-15 19:44:04.159396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.159523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.159543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.163171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.163295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.163331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.167077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.167275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.167295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.170750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.170974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.170995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.174470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.174615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.174636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.178302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.178396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.178416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.182037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.182137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.185717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.185801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.185836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.189487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.189628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.193351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.193472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.193492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.197254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.197444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.197465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.200983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.201192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.201212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.204664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.204809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.204843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.208557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.208631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.208651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.212253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.212325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.212344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.216125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.216212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.216232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.219943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.220066] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.220086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.223637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.223762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.223781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.227536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.227713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.227733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.231305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.231527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.231548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.235037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.235126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.235147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.239090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.239189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.239210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.243010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.243103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.243123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.246864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.247003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.247025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.250639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.250783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.250803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.254442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.254611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.254632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.258475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.258676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.383 [2024-12-15 19:44:04.258718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.383 [2024-12-15 19:44:04.262258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.383 [2024-12-15 19:44:04.262519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.384 [2024-12-15 19:44:04.262546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.384 [2024-12-15 19:44:04.266160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.384 [2024-12-15 19:44:04.266238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.384 [2024-12-15 19:44:04.266258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.384 [2024-12-15 19:44:04.270097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.384 [2024-12-15 19:44:04.270193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.384 [2024-12-15 19:44:04.270228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.384 [2024-12-15 19:44:04.273831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 
00:23:17.384 [2024-12-15 19:44:04.273933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.384 [2024-12-15 19:44:04.273953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.277658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.277744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.277763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.281530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.281651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.281671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.285338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.285459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.285480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.289236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.289419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.289440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.292993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.293169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.293189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.296766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.296955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.296976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.300490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.300565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.300598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.304283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.304357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.304377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.308216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.308302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.308322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.312087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.312214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.312234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.315848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.315980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.316000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.319759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.319949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.323522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.323712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.323732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.327380] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.327525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.327545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.331121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.644 [2024-12-15 19:44:04.331220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.644 [2024-12-15 19:44:04.331240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.644 [2024-12-15 19:44:04.334931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.335012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.335031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.338626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.338725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.338745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.342399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.342527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.342548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.346263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.346424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.346445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.350247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.350431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.350452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:17.645 [2024-12-15 19:44:04.353979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.354192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.354213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.357804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.358015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.361465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.361541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.361561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.365213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.365306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.365326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.369013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.369084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.372771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.372923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.372943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.376475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.376573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.376593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.380403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.380601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.380621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.384044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.384215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.384234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.387917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.388063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.388084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.391598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.391686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.391705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.395394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.395480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.395500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.399192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.399295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.399315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.402966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.403101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.403122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.406668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.406776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.406801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.410537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.410731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.410751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.414329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.414516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.414536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.418260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.418429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.418450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.422078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.422155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.422175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.425811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.425923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.425943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.429630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.429721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.429740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.433452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.433590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.645 [2024-12-15 19:44:04.437308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.645 [2024-12-15 19:44:04.437408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.645 [2024-12-15 19:44:04.437427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.441170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.441366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.444905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.445107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.445126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.448655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.448798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.448818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.452525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.452602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.456362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.456460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 
[2024-12-15 19:44:04.456479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.460164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.460241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.460261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.463986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.464132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.464152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.467817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.467976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.471787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.471988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.472009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.475514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.475718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.475738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.479472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.479624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.479645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.483404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.483494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.483514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.487308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.487389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.487409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.491255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.491347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.491375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.495231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.495368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.495403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.499186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.503279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.503451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.503471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.507190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.507371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.507390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.510976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.511136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.514884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.514971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.514991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.518662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.518756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.518792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.522438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.522514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.522534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.526188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.526312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.526332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.530047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.530175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.646 [2024-12-15 19:44:04.533921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.646 [2024-12-15 19:44:04.534107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.646 [2024-12-15 19:44:04.534128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.906 [2024-12-15 19:44:04.537738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.906 [2024-12-15 19:44:04.537911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.537931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.541570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.541715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.541735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.545377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.545495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.545514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.549218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.549294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.549314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.553003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.553083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.553102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.556832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.557018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.557039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.560635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.560755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.560774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.564505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.564696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.564716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.568280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.568446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.568465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.572192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.572334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.572354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.575912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.576007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.576026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.579832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.579922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.579942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.583557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.583629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.583648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.587525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.587647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.587667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.591368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 
00:23:17.907 [2024-12-15 19:44:04.591504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.591525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.595256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.595427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.595446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.599141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.599385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.599406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.602857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.602950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.602970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.606676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.606781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.606801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.610546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.610632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.610660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.614356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.614437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.614457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.618202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.618326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.618372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.622063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.622174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.622193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.625858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.626036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.626057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.629680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.629855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.629875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.633568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.633713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.633734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.637427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.637526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.637544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.907 [2024-12-15 19:44:04.641231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.907 [2024-12-15 19:44:04.641309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.907 [2024-12-15 19:44:04.641329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.644969] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.645071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.645092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.648990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.649113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.649132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.652753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.652886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.652907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.656587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.656754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.656773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.660463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.660693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.660713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.664165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.664250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.664271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.668119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.668241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.668261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.672032] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.672150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.672172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.675975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.676061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.676097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.679982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.680116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.680135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.683668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.683801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.683822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.687592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.687759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.691498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.691700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.691719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.695273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.695416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.695436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
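The repeated tcp.c:2036 data_crc32_calc_done entries above are CRC-32C data-digest mismatches detected on the TCP qpair, and each affected WRITE is then completed back to the host with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the behavior this digest-error pass exercises. As a rough, illustrative sketch only (plain C, not SPDK's actual digest helpers), the reflected CRC-32C that the NVMe/TCP data digest is based on can be computed like this:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise, reflected CRC-32C (Castagnoli polynomial 0x1EDC6F41,
     * reflected constant 0x82F63B78) -- a minimal sketch, not SPDK code. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Standard CRC-32C check value: crc32c("123456789") == 0xE3069283. */
        const char *payload = "123456789";
        uint32_t digest = crc32c((const uint8_t *)payload, strlen(payload));

        printf("computed digest: 0x%08X -> %s\n", digest,
               digest == 0xE3069283u ? "match" : "data digest error");
        return 0;
    }

A receiver that recomputes this value over a data PDU payload and gets a different result than the digest carried in the PDU would report exactly the kind of mismatch logged above; the payload contents themselves are not trusted, so the command is failed with a transient transport status rather than a media error.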
00:23:17.908 [2024-12-15 19:44:04.699090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.699184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.702742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.702844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.702864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.706359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.706451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.706472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.710286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.710424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.710444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.714098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.714224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.714244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.717887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.718083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.718103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.721663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.721880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.721904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.725390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.725484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.725505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.729307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.729391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.729411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.733118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.733211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.733231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.736840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.736930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.736951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.740640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.740767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.740788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.744405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.744526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.908 [2024-12-15 19:44:04.744547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.908 [2024-12-15 19:44:04.748290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.908 [2024-12-15 19:44:04.748480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.748500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.752157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.752378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.752399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.755926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.756003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.756024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.759767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.759858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.759890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.763451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.763567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.767215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.767292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.767312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.771013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.771138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.771159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.774731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.774881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.778558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.778746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.778766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.782311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.782514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.782534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.786053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.786218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.786238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.789840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.789940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.793546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.793629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.793648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.909 [2024-12-15 19:44:04.797380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:17.909 [2024-12-15 19:44:04.797468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.909 [2024-12-15 19:44:04.797487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.801153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.801284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.169 [2024-12-15 
19:44:04.801304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.804877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.804976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.169 [2024-12-15 19:44:04.804996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.808700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.808909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.169 [2024-12-15 19:44:04.808931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.812542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.812736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.169 [2024-12-15 19:44:04.812757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.816363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.816502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.169 [2024-12-15 19:44:04.816522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.169 [2024-12-15 19:44:04.820308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.169 [2024-12-15 19:44:04.820385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.820421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.824026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.824102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.824123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.827751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.827860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:18.170 [2024-12-15 19:44:04.827892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.831580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.831706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.831726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.835337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.835461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.835481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.839270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.839447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.839469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.843062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.843250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.843271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.846877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.847017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.847037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.850694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.850788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.850808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.854415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.854510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.854530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.858181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.858257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.858277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.861953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.862105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.865689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.865862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.865883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.869526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.869712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.869733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.873308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.873480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.873500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.877036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.877189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.877210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.880797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.880888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.880908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.884477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.884566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.884586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.888256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.888345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.888366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.892180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.892306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.892327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.895923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.896050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.896070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.899772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.899977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.899998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.903507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.903680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.903700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.907310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.907462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.907482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.911054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.911155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.911175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.914767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.914875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.914896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.918461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.918544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.918564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.922361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.170 [2024-12-15 19:44:04.922491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.170 [2024-12-15 19:44:04.922511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.170 [2024-12-15 19:44:04.926063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.926178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.926198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.929897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.930079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.930100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.933606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 
19:44:04.933796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.933828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.937417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.937563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.937583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.941189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.941293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.944954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.945034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.945054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.948685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.948760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.948781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.952494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.952627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.952648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.956235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.956341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.956361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.960218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with 
pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.960394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.960415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.964016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.964184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.964205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.967768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.967934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.967955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.971614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.971704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.971725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.975378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.975467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.975487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.979215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.979322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.979342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.983067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.983194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.983215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.986755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.986886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.986907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.990614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.990794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.990816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.994476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.994678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.994699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:04.998282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:04.998418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:04.998439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.002066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.002151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.005866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.005948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.005969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.009687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.009761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.009781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.013720] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.013875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.013897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.017678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.017784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.017805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.021648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.021842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.021875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.025441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.025661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.025682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.029218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.171 [2024-12-15 19:44:05.029372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.171 [2024-12-15 19:44:05.029393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.171 [2024-12-15 19:44:05.033004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.033103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.033124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.036737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.036827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.036860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:18.172 [2024-12-15 19:44:05.040502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.040601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.040622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.044326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.044454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.044475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.048160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.048280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.048301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.052061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.052239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.052260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.055776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.055958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.055979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.172 [2024-12-15 19:44:05.059526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.172 [2024-12-15 19:44:05.059680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.172 [2024-12-15 19:44:05.059702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.063354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.063455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.063475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.067174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.067263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.067283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.070963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.071059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.071080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.074761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.074900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.074921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.078469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.078589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.078610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.082433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.082615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.082636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.086207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.086455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.086476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.089947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.090095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.090115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.093807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.093896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.093917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.097562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.097639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.097660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.101409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.101485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.101505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.105236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.105366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.105386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.108932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.109092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.109113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.112762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.112955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.112977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.116453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.116679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.116699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.120223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.120373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.120394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.124112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.124207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.124227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.127798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.127890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.127910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.131534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.131613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.131633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.135406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.135534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.135554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.139107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.139222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.139242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.143058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.143235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 
[2024-12-15 19:44:05.143256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.146790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.147007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.431 [2024-12-15 19:44:05.147028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.431 [2024-12-15 19:44:05.150489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.431 [2024-12-15 19:44:05.150644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.150664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.154370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.154468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.154489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.158113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.158195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.161918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.161995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.162016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.165684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.165811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.165843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.169435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.169556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.169578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.173376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.173556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.173577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.177129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.177337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.180864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.181008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.184637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.184737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.184758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.188329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.188412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.188432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.192137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.192227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.192247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.195939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.196068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.199656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.199783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.199803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.203526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.203702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.203722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.207302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.207497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.207518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.211251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.211399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.211419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.215027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.215132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.215152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.218700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.218776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.218796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.222492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.222569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.222589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.226258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.226394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.226415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.229996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.230122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.230143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.233865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.234064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.237612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.237855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.237876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.241471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.241619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.241639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.245248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.245325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.245345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.249083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.432 [2024-12-15 19:44:05.249175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.432 [2024-12-15 19:44:05.249196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.432 [2024-12-15 19:44:05.252884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.252978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.252998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.256737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.256883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.256904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.260489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.260599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.260620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.264404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.264583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.264604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.268156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.268331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.268353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.271963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.272115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.272136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.275691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 
19:44:05.275790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.275811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.279431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.279525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.279546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.283169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.283268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.283289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.286889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.287019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.287040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.290620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.290740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.290761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.294514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.294699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.294727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.298238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.298411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.298432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.302042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with 
pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.302194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.302215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.305879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.305979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.306000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.309530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.309623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.309643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.313323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.313420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.313440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.317115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.317245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.317265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.433 [2024-12-15 19:44:05.320862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.433 [2024-12-15 19:44:05.320978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.433 [2024-12-15 19:44:05.320999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.324680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.324871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.324891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.328421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.328640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.328667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.332293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.332387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.332407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.336131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.336235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.336256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.339846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.339937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.339957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.343638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.343713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.343734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.347404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.691 [2024-12-15 19:44:05.347533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.691 [2024-12-15 19:44:05.347553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.691 [2024-12-15 19:44:05.351214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.692 [2024-12-15 19:44:05.351335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.692 [2024-12-15 19:44:05.351356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.692 [2024-12-15 19:44:05.355079] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.692 [2024-12-15 19:44:05.355256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.692 [2024-12-15 19:44:05.355277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.692 [2024-12-15 19:44:05.358830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.692 [2024-12-15 19:44:05.359010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.692 [2024-12-15 19:44:05.359031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.692 [2024-12-15 19:44:05.362537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7caba0) with pdu=0x2000190fef90 00:23:18.692 [2024-12-15 19:44:05.362608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.692 [2024-12-15 19:44:05.362629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.692 00:23:18.692 Latency(us) 00:23:18.692 [2024-12-15T19:44:05.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.692 [2024-12-15T19:44:05.588Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:18.692 nvme0n1 : 2.00 8085.78 1010.72 0.00 0.00 1974.31 1630.95 11677.32 00:23:18.692 [2024-12-15T19:44:05.588Z] =================================================================================================================== 00:23:18.692 [2024-12-15T19:44:05.588Z] Total : 8085.78 1010.72 0.00 0.00 1974.31 1630.95 11677.32 00:23:18.692 0 00:23:18.692 19:44:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:18.692 19:44:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:18.692 19:44:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:18.692 19:44:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:18.692 | .driver_specific 00:23:18.692 | .nvme_error 00:23:18.692 | .status_code 00:23:18.692 | .command_transient_transport_error' 00:23:18.962 19:44:05 -- host/digest.sh@71 -- # (( 522 > 0 )) 00:23:18.962 19:44:05 -- host/digest.sh@73 -- # killprocess 97885 00:23:18.962 19:44:05 -- common/autotest_common.sh@936 -- # '[' -z 97885 ']' 00:23:18.962 19:44:05 -- common/autotest_common.sh@940 -- # kill -0 97885 00:23:18.962 19:44:05 -- common/autotest_common.sh@941 -- # uname 00:23:18.962 19:44:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.962 19:44:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97885 00:23:18.962 19:44:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.962 killing process with pid 97885 00:23:18.962 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.962 00:23:18.962 Latency(us) 00:23:18.962 [2024-12-15T19:44:05.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.962 [2024-12-15T19:44:05.858Z] 
=================================================================================================================== 00:23:18.962 [2024-12-15T19:44:05.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.962 19:44:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.962 19:44:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97885' 00:23:18.962 19:44:05 -- common/autotest_common.sh@955 -- # kill 97885 00:23:18.962 19:44:05 -- common/autotest_common.sh@960 -- # wait 97885 00:23:19.233 19:44:05 -- host/digest.sh@115 -- # killprocess 97589 00:23:19.233 19:44:05 -- common/autotest_common.sh@936 -- # '[' -z 97589 ']' 00:23:19.233 19:44:05 -- common/autotest_common.sh@940 -- # kill -0 97589 00:23:19.233 19:44:05 -- common/autotest_common.sh@941 -- # uname 00:23:19.233 19:44:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.233 19:44:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97589 00:23:19.233 19:44:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:19.233 killing process with pid 97589 00:23:19.233 19:44:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:19.233 19:44:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97589' 00:23:19.233 19:44:06 -- common/autotest_common.sh@955 -- # kill 97589 00:23:19.233 19:44:06 -- common/autotest_common.sh@960 -- # wait 97589 00:23:19.491 00:23:19.491 real 0m18.644s 00:23:19.491 user 0m36.068s 00:23:19.491 sys 0m5.080s 00:23:19.491 19:44:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:19.491 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.491 ************************************ 00:23:19.491 END TEST nvmf_digest_error 00:23:19.491 ************************************ 00:23:19.491 19:44:06 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:19.491 19:44:06 -- host/digest.sh@139 -- # nvmftestfini 00:23:19.491 19:44:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:19.491 19:44:06 -- nvmf/common.sh@116 -- # sync 00:23:19.750 19:44:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:19.750 19:44:06 -- nvmf/common.sh@119 -- # set +e 00:23:19.750 19:44:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:19.750 19:44:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:19.750 rmmod nvme_tcp 00:23:19.750 rmmod nvme_fabrics 00:23:19.750 rmmod nvme_keyring 00:23:19.750 19:44:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:19.750 19:44:06 -- nvmf/common.sh@123 -- # set -e 00:23:19.750 19:44:06 -- nvmf/common.sh@124 -- # return 0 00:23:19.750 19:44:06 -- nvmf/common.sh@477 -- # '[' -n 97589 ']' 00:23:19.750 19:44:06 -- nvmf/common.sh@478 -- # killprocess 97589 00:23:19.750 19:44:06 -- common/autotest_common.sh@936 -- # '[' -z 97589 ']' 00:23:19.750 19:44:06 -- common/autotest_common.sh@940 -- # kill -0 97589 00:23:19.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97589) - No such process 00:23:19.750 Process with pid 97589 is not found 00:23:19.750 19:44:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97589 is not found' 00:23:19.750 19:44:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:19.750 19:44:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:19.750 19:44:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:19.750 19:44:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.750 19:44:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:19.750 19:44:06 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.750 19:44:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.750 19:44:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.750 19:44:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:19.750 00:23:19.750 real 0m36.611s 00:23:19.750 user 1m8.791s 00:23:19.750 sys 0m10.331s 00:23:19.750 19:44:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:19.750 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.750 ************************************ 00:23:19.750 END TEST nvmf_digest 00:23:19.750 ************************************ 00:23:19.750 19:44:06 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:19.750 19:44:06 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:19.750 19:44:06 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:19.750 19:44:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:19.750 19:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.750 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.750 ************************************ 00:23:19.750 START TEST nvmf_mdns_discovery 00:23:19.750 ************************************ 00:23:19.750 19:44:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:19.750 * Looking for test storage... 00:23:19.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:19.750 19:44:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:19.750 19:44:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:19.750 19:44:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:20.009 19:44:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:20.009 19:44:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:20.010 19:44:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:20.010 19:44:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:20.010 19:44:06 -- scripts/common.sh@335 -- # IFS=.-: 00:23:20.010 19:44:06 -- scripts/common.sh@335 -- # read -ra ver1 00:23:20.010 19:44:06 -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.010 19:44:06 -- scripts/common.sh@336 -- # read -ra ver2 00:23:20.010 19:44:06 -- scripts/common.sh@337 -- # local 'op=<' 00:23:20.010 19:44:06 -- scripts/common.sh@339 -- # ver1_l=2 00:23:20.010 19:44:06 -- scripts/common.sh@340 -- # ver2_l=1 00:23:20.010 19:44:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:20.010 19:44:06 -- scripts/common.sh@343 -- # case "$op" in 00:23:20.010 19:44:06 -- scripts/common.sh@344 -- # : 1 00:23:20.010 19:44:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:20.010 19:44:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.010 19:44:06 -- scripts/common.sh@364 -- # decimal 1 00:23:20.010 19:44:06 -- scripts/common.sh@352 -- # local d=1 00:23:20.010 19:44:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.010 19:44:06 -- scripts/common.sh@354 -- # echo 1 00:23:20.010 19:44:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:20.010 19:44:06 -- scripts/common.sh@365 -- # decimal 2 00:23:20.010 19:44:06 -- scripts/common.sh@352 -- # local d=2 00:23:20.010 19:44:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.010 19:44:06 -- scripts/common.sh@354 -- # echo 2 00:23:20.010 19:44:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:20.010 19:44:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:20.010 19:44:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:20.010 19:44:06 -- scripts/common.sh@367 -- # return 0 00:23:20.010 19:44:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.010 19:44:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.010 --rc genhtml_branch_coverage=1 00:23:20.010 --rc genhtml_function_coverage=1 00:23:20.010 --rc genhtml_legend=1 00:23:20.010 --rc geninfo_all_blocks=1 00:23:20.010 --rc geninfo_unexecuted_blocks=1 00:23:20.010 00:23:20.010 ' 00:23:20.010 19:44:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.010 --rc genhtml_branch_coverage=1 00:23:20.010 --rc genhtml_function_coverage=1 00:23:20.010 --rc genhtml_legend=1 00:23:20.010 --rc geninfo_all_blocks=1 00:23:20.010 --rc geninfo_unexecuted_blocks=1 00:23:20.010 00:23:20.010 ' 00:23:20.010 19:44:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.010 --rc genhtml_branch_coverage=1 00:23:20.010 --rc genhtml_function_coverage=1 00:23:20.010 --rc genhtml_legend=1 00:23:20.010 --rc geninfo_all_blocks=1 00:23:20.010 --rc geninfo_unexecuted_blocks=1 00:23:20.010 00:23:20.010 ' 00:23:20.010 19:44:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:20.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.010 --rc genhtml_branch_coverage=1 00:23:20.010 --rc genhtml_function_coverage=1 00:23:20.010 --rc genhtml_legend=1 00:23:20.010 --rc geninfo_all_blocks=1 00:23:20.010 --rc geninfo_unexecuted_blocks=1 00:23:20.010 00:23:20.010 ' 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.010 19:44:06 -- nvmf/common.sh@7 -- # uname -s 00:23:20.010 19:44:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.010 19:44:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.010 19:44:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.010 19:44:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.010 19:44:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.010 19:44:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.010 19:44:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.010 19:44:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.010 19:44:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.010 19:44:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:23:20.010 19:44:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:23:20.010 19:44:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.010 19:44:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.010 19:44:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.010 19:44:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.010 19:44:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.010 19:44:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.010 19:44:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.010 19:44:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.010 19:44:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.010 19:44:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.010 19:44:06 -- paths/export.sh@5 -- # export PATH 00:23:20.010 19:44:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.010 19:44:06 -- nvmf/common.sh@46 -- # : 0 00:23:20.010 19:44:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:20.010 19:44:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:20.010 19:44:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:20.010 19:44:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.010 19:44:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.010 19:44:06 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:20.010 19:44:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:20.010 19:44:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:20.010 19:44:06 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:20.010 19:44:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:20.010 19:44:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.010 19:44:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:20.010 19:44:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:20.010 19:44:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:20.010 19:44:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.010 19:44:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.010 19:44:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.010 19:44:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:20.010 19:44:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:20.010 19:44:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.010 19:44:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.010 19:44:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:20.010 19:44:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:20.010 19:44:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:20.010 19:44:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:20.010 19:44:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:20.010 19:44:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.010 19:44:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:20.010 19:44:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:20.010 19:44:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:20.010 19:44:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:20.010 19:44:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:20.010 19:44:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:20.010 Cannot find device "nvmf_tgt_br" 00:23:20.010 19:44:06 -- nvmf/common.sh@154 -- # true 00:23:20.010 19:44:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.010 Cannot find device "nvmf_tgt_br2" 00:23:20.010 19:44:06 -- nvmf/common.sh@155 -- # true 00:23:20.010 19:44:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:20.010 19:44:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:20.010 Cannot find device "nvmf_tgt_br" 00:23:20.010 19:44:06 -- nvmf/common.sh@157 -- # true 00:23:20.010 
19:44:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:20.010 Cannot find device "nvmf_tgt_br2" 00:23:20.010 19:44:06 -- nvmf/common.sh@158 -- # true 00:23:20.010 19:44:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:20.010 19:44:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:20.010 19:44:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.011 19:44:06 -- nvmf/common.sh@161 -- # true 00:23:20.011 19:44:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.011 19:44:06 -- nvmf/common.sh@162 -- # true 00:23:20.011 19:44:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:20.011 19:44:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:20.011 19:44:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:20.011 19:44:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:20.269 19:44:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:20.269 19:44:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:20.269 19:44:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:20.269 19:44:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:20.269 19:44:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:20.269 19:44:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:20.269 19:44:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:20.269 19:44:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:20.269 19:44:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:20.269 19:44:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.269 19:44:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.269 19:44:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.269 19:44:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:20.269 19:44:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:20.269 19:44:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.269 19:44:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.269 19:44:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.269 19:44:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.269 19:44:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.269 19:44:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:20.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:20.269 00:23:20.269 --- 10.0.0.2 ping statistics --- 00:23:20.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.269 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:20.269 19:44:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:20.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:20.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:23:20.269 00:23:20.269 --- 10.0.0.3 ping statistics --- 00:23:20.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.269 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:20.269 19:44:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:20.269 00:23:20.269 --- 10.0.0.1 ping statistics --- 00:23:20.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.269 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:20.269 19:44:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.269 19:44:07 -- nvmf/common.sh@421 -- # return 0 00:23:20.269 19:44:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:20.269 19:44:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.269 19:44:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:20.269 19:44:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:20.269 19:44:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.269 19:44:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:20.269 19:44:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:20.269 19:44:07 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:20.269 19:44:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:20.270 19:44:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.270 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.270 19:44:07 -- nvmf/common.sh@469 -- # nvmfpid=98202 00:23:20.270 19:44:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:20.270 19:44:07 -- nvmf/common.sh@470 -- # waitforlisten 98202 00:23:20.270 19:44:07 -- common/autotest_common.sh@829 -- # '[' -z 98202 ']' 00:23:20.270 19:44:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.270 19:44:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.270 19:44:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.270 19:44:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.270 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.270 [2024-12-15 19:44:07.134507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:20.270 [2024-12-15 19:44:07.134597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.528 [2024-12-15 19:44:07.273457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.528 [2024-12-15 19:44:07.376624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:20.528 [2024-12-15 19:44:07.376793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.528 [2024-12-15 19:44:07.376811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:20.528 [2024-12-15 19:44:07.376843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.528 [2024-12-15 19:44:07.376874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.463 19:44:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.463 19:44:08 -- common/autotest_common.sh@862 -- # return 0 00:23:21.463 19:44:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:21.463 19:44:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.463 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 19:44:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.463 19:44:08 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:21.463 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.463 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.463 19:44:08 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:21.463 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.463 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 [2024-12-15 19:44:08.366155] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 [2024-12-15 19:44:08.374367] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 null0 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 null1 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 null2 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 null3 00:23:21.721 19:44:08 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:21.721 19:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 19:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@47 -- # hostpid=98253 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:21.721 19:44:08 -- host/mdns_discovery.sh@48 -- # waitforlisten 98253 /tmp/host.sock 00:23:21.721 19:44:08 -- common/autotest_common.sh@829 -- # '[' -z 98253 ']' 00:23:21.721 19:44:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:21.721 19:44:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.721 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:21.721 19:44:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:21.721 19:44:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.721 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.721 [2024-12-15 19:44:08.476771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:21.721 [2024-12-15 19:44:08.476910] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98253 ] 00:23:21.721 [2024-12-15 19:44:08.614397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.979 [2024-12-15 19:44:08.713115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:21.979 [2024-12-15 19:44:08.713286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.913 19:44:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.913 19:44:09 -- common/autotest_common.sh@862 -- # return 0 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@57 -- # avahipid=98282 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:22.913 19:44:09 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:22.913 Process 1060 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:22.913 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:22.913 Successfully dropped root privileges. 00:23:22.913 avahi-daemon 0.8 starting up. 00:23:22.913 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:22.913 Successfully called chroot(). 00:23:22.913 Successfully dropped remaining capabilities. 00:23:22.913 No service file found in /etc/avahi/services. 00:23:23.848 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
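The avahi-daemon starting up above runs inside the target namespace with a minimal config the script feeds through a file descriptor (avahi-daemon -f /dev/fd/63); it pins the daemon to the two target-side veth interfaces and IPv4 mDNS. An equivalent standalone form, with an illustrative scratch path in place of the process substitution:

# same settings as the echo -e '[server]...' fed to /dev/fd/63 above
printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n' > /tmp/avahi-nvmf-test.conf
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf-test.conf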
00:23:23.848 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:23.848 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:23.848 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:23.848 Network interface enumeration completed. 00:23:23.848 Registering new address record for fe80::348c:6dff:fe7d:df16 on nvmf_tgt_if2.*. 00:23:23.848 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:23.848 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:23.848 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:23.848 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1001701637. 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # sort 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # xargs 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@64 -- # sort 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@64 -- # xargs 00:23:23.848 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.848 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:23.848 19:44:10 -- common/autotest_common.sh@10 -- # set +x 
00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # xargs 00:23:23.848 19:44:10 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@68 -- # xargs 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 [2024-12-15 19:44:10.908251] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 [2024-12-15 19:44:10.975147] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 
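At this point the first discoverable subsystem is fully wired on the primary target: TCP transport created, discovery listener on 10.0.0.2:8009, null0 exposed through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and nqn.2021-12.io.spdk:test allowed as a host. A condensed replay of that target-side sequence, using the same rpc_cmd helper as the trace (it talks to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

The trace below repeats the same pattern for nqn.2016-06.io.spdk:cnode20 with null2 and the 10.0.0.3 listeners.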
19:44:10 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.107 19:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.107 19:44:10 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:24.107 19:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.107 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:23:24.365 19:44:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:24.365 19:44:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.365 19:44:11 -- common/autotest_common.sh@10 -- # set +x 00:23:24.365 19:44:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:24.365 19:44:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.365 19:44:11 -- common/autotest_common.sh@10 -- # set +x 00:23:24.365 [2024-12-15 19:44:11.015061] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:24.365 19:44:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:24.365 19:44:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.365 19:44:11 -- common/autotest_common.sh@10 -- # set +x 00:23:24.365 [2024-12-15 19:44:11.023067] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:24.365 19:44:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98334 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:24.365 19:44:11 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:24.932 [2024-12-15 19:44:11.808237] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:25.190 Established under name 'CDC' 00:23:25.448 [2024-12-15 19:44:12.208256] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:25.448 [2024-12-15 19:44:12.208281] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:25.448 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:25.448 cookie is 0 00:23:25.448 is_local: 1 00:23:25.448 our_own: 0 00:23:25.448 wide_area: 0 00:23:25.448 multicast: 1 00:23:25.448 cached: 1 00:23:25.448 [2024-12-15 19:44:12.308263] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:25.448 [2024-12-15 19:44:12.308284] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:25.448 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:25.448 cookie is 0 00:23:25.448 is_local: 1 00:23:25.448 our_own: 0 00:23:25.448 wide_area: 0 00:23:25.448 multicast: 1 00:23:25.448 
cached: 1 00:23:26.382 [2024-12-15 19:44:13.220645] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:26.382 [2024-12-15 19:44:13.220687] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:26.382 [2024-12-15 19:44:13.220705] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:26.640 [2024-12-15 19:44:13.306882] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:26.640 [2024-12-15 19:44:13.320300] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.640 [2024-12-15 19:44:13.320482] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.640 [2024-12-15 19:44:13.320567] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.640 [2024-12-15 19:44:13.372944] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:26.640 [2024-12-15 19:44:13.373178] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:26.640 [2024-12-15 19:44:13.406445] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:26.640 [2024-12-15 19:44:13.461487] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:26.641 [2024-12-15 19:44:13.461672] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.170 19:44:16 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:29.170 19:44:16 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:29.170 19:44:16 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:29.170 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.170 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.170 19:44:16 -- host/mdns_discovery.sh@80 -- # sort 00:23:29.170 19:44:16 -- host/mdns_discovery.sh@80 -- # xargs 00:23:29.170 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.428 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.428 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@76 -- # sort 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@76 -- # xargs 00:23:29.428 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:29.428 19:44:16 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.428 19:44:16 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:29.429 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@68 -- # sort 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@68 -- # xargs 00:23:29.429 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.429 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.429 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@64 -- # sort 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@64 -- # xargs 00:23:29.429 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:29.429 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.429 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:29.429 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:29.429 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:29.429 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.429 19:44:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:29.429 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.687 19:44:16 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.687 19:44:16 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:29.687 19:44:16 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.687 19:44:16 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:29.687 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.687 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.687 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.687 19:44:16 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:29.688 19:44:16 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:29.688 19:44:16 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:29.688 19:44:16 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:29.688 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.688 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.688 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.688 19:44:16 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:29.688 19:44:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.688 19:44:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.688 19:44:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.688 19:44:16 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.624 19:44:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:30.624 19:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@64 -- # sort 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@64 -- # xargs 00:23:30.624 19:44:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:30.624 19:44:17 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:30.624 19:44:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.624 19:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:30.624 19:44:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:30.884 19:44:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.884 19:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 [2024-12-15 19:44:17.533901] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.884 [2024-12-15 19:44:17.534750] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:30.884 [2024-12-15 19:44:17.534786] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.884 [2024-12-15 19:44:17.534838] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:30.884 [2024-12-15 19:44:17.534855] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:30.884 19:44:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:30.884 19:44:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.884 19:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 [2024-12-15 19:44:17.541754] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:30.884 [2024-12-15 19:44:17.542715] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:30.884 [2024-12-15 19:44:17.542795] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:30.884 19:44:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.884 19:44:17 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:30.884 [2024-12-15 19:44:17.673794] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:30.884 [2024-12-15 19:44:17.674289] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:30.884 [2024-12-15 19:44:17.733359] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:30.884 [2024-12-15 19:44:17.733532] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:30.884 [2024-12-15 19:44:17.733655] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.884 [2024-12-15 19:44:17.733787] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.885 [2024-12-15 19:44:17.733980] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:23:30.885 [2024-12-15 19:44:17.734050] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:30.885 [2024-12-15 19:44:17.734196] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:30.885 [2024-12-15 19:44:17.734492] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:30.885 [2024-12-15 19:44:17.778962] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:31.151 [2024-12-15 19:44:17.779148] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.151 [2024-12-15 19:44:17.779979] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:31.151 [2024-12-15 19:44:17.780156] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:31.718 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:31.718 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.718 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.718 19:44:18 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:31.977 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # xargs 00:23:31.977 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:31.977 19:44:18 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@72 -- # xargs 00:23:31.977 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 [2024-12-15 19:44:18.862465] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:31.977 [2024-12-15 19:44:18.862499] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.977 [2024-12-15 19:44:18.862538] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:31.977 [2024-12-15 19:44:18.862553] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:31.977 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.977 19:44:18 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:31.977 19:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.977 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:23:31.977 [2024-12-15 19:44:18.871359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.871590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.871749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.871921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.872143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.872361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.872569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.872767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.872921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.238 [2024-12-15 19:44:18.875208] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:32.238 [2024-12-15 19:44:18.875429] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:32.238 19:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.238 19:44:18 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:32.238 [2024-12-15 19:44:18.881359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.881398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.881413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.881422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.881433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.881443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.881453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.238 [2024-12-15 19:44:18.881462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.238 [2024-12-15 19:44:18.881471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.238 [2024-12-15 19:44:18.881497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.891302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.891372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.238 [2024-12-15 19:44:18.891461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.891513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.891532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.238 [2024-12-15 19:44:18.891543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 
00:23:32.238 [2024-12-15 19:44:18.891560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.891575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.238 [2024-12-15 19:44:18.891584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.238 [2024-12-15 19:44:18.891595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.238 [2024-12-15 19:44:18.891610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.238 [2024-12-15 19:44:18.901312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.238 [2024-12-15 19:44:18.901410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.901459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.901478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.238 [2024-12-15 19:44:18.901489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.238 [2024-12-15 19:44:18.901506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.901531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.238 [2024-12-15 19:44:18.901543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.238 [2024-12-15 19:44:18.901558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.238 [2024-12-15 19:44:18.901572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.238 [2024-12-15 19:44:18.901585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.238 [2024-12-15 19:44:18.901640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.901688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.901706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.238 [2024-12-15 19:44:18.901718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.238 [2024-12-15 19:44:18.901734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.901748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.238 [2024-12-15 19:44:18.901757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.238 [2024-12-15 19:44:18.901767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:32.238 [2024-12-15 19:44:18.901782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.238 [2024-12-15 19:44:18.911379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.238 [2024-12-15 19:44:18.911462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.911511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.911529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.238 [2024-12-15 19:44:18.911541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.238 [2024-12-15 19:44:18.911557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.238 [2024-12-15 19:44:18.911573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.238 [2024-12-15 19:44:18.911583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.238 [2024-12-15 19:44:18.911592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.238 [2024-12-15 19:44:18.911606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.238 [2024-12-15 19:44:18.911629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.238 [2024-12-15 19:44:18.911687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.911734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.238 [2024-12-15 19:44:18.911752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.239 [2024-12-15 19:44:18.911763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.911778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.911792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.911801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.911810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.239 [2024-12-15 19:44:18.911842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
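errno 111 is ECONNREFUSED: the repeating reset/reconnect failures against port 4420 on tqpair 0xdefce0 (10.0.0.2) and 0xd9d070 (10.0.0.3) begin right after the test drops the original 4420 listeners, so nothing is accepting connections there any more. The trigger, replayed from host/mdns_discovery.sh@160-161 above:

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

bdev_nvme keeps retrying the dead 4420 paths in the meantime; the 4421 paths added a moment earlier are expected to stay connected.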
00:23:32.239 [2024-12-15 19:44:18.921432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.239 [2024-12-15 19:44:18.921683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.921738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.921758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.239 [2024-12-15 19:44:18.921770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.921790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.921840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.921857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.921867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.239 [2024-12-15 19:44:18.921883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.239 [2024-12-15 19:44:18.921898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.239 [2024-12-15 19:44:18.921975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.922024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.922043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.239 [2024-12-15 19:44:18.922054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.922070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.922085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.922095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.922110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.239 [2024-12-15 19:44:18.922140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.239 [2024-12-15 19:44:18.931639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.239 [2024-12-15 19:44:18.931724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.931772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.931790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.239 [2024-12-15 19:44:18.931801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.931835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.931857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.931867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.931876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.239 [2024-12-15 19:44:18.931890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.239 [2024-12-15 19:44:18.931947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.239 [2024-12-15 19:44:18.932026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.932094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.932113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.239 [2024-12-15 19:44:18.932124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.932142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.932157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.932166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.932176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.239 [2024-12-15 19:44:18.932191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.239 [2024-12-15 19:44:18.941691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.239 [2024-12-15 19:44:18.941774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.941843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.941864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.239 [2024-12-15 19:44:18.941876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.941892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.941906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.941916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.941925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.239 [2024-12-15 19:44:18.941939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.239 [2024-12-15 19:44:18.941996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.239 [2024-12-15 19:44:18.942057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.942105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.942124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.239 [2024-12-15 19:44:18.942135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.942152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.942166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.942176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.942185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.239 [2024-12-15 19:44:18.942216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.239 [2024-12-15 19:44:18.951744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.239 [2024-12-15 19:44:18.952023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.952078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.952099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.239 [2024-12-15 19:44:18.952111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.952130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.952161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.952176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.952186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.239 [2024-12-15 19:44:18.952201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.239 [2024-12-15 19:44:18.952215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.239 [2024-12-15 19:44:18.952290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.952340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.952359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.239 [2024-12-15 19:44:18.952371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.952388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.952423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.952453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.952463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.239 [2024-12-15 19:44:18.952478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.239 [2024-12-15 19:44:18.961980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.239 [2024-12-15 19:44:18.962067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.962116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.239 [2024-12-15 19:44:18.962135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.239 [2024-12-15 19:44:18.962147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.239 [2024-12-15 19:44:18.962163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.239 [2024-12-15 19:44:18.962179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.239 [2024-12-15 19:44:18.962188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.239 [2024-12-15 19:44:18.962197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.239 [2024-12-15 19:44:18.962211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.240 [2024-12-15 19:44:18.962243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.240 [2024-12-15 19:44:18.962301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.962357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.962377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.240 [2024-12-15 19:44:18.962388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.962404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.962464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.962479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.962488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.240 [2024-12-15 19:44:18.962503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.240 [2024-12-15 19:44:18.972036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.240 [2024-12-15 19:44:18.972123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.972172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.972190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.240 [2024-12-15 19:44:18.972201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.972217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.972232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.972240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.972249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.240 [2024-12-15 19:44:18.972265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.240 [2024-12-15 19:44:18.972289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.240 [2024-12-15 19:44:18.972348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.972395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.972414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.240 [2024-12-15 19:44:18.972425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.972441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.972471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.972484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.972493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.240 [2024-12-15 19:44:18.972507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.240 [2024-12-15 19:44:18.982090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.240 [2024-12-15 19:44:18.982335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.982402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.982423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.240 [2024-12-15 19:44:18.982436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.982455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.982501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.982515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.982525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.240 [2024-12-15 19:44:18.982540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.240 [2024-12-15 19:44:18.982553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.240 [2024-12-15 19:44:18.982627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.982676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.982695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.240 [2024-12-15 19:44:18.982706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.982722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.982737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.982747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.982767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.240 [2024-12-15 19:44:18.982782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.240 [2024-12-15 19:44:18.992295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.240 [2024-12-15 19:44:18.992561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.992736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.992796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.240 [2024-12-15 19:44:18.992923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.992974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.993006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.993019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.993029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.240 [2024-12-15 19:44:18.993044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.240 [2024-12-15 19:44:18.993060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.240 [2024-12-15 19:44:18.993126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.993179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:18.993199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.240 [2024-12-15 19:44:18.993210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:18.993227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:18.993256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:18.993269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:18.993279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.240 [2024-12-15 19:44:18.993295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.240 [2024-12-15 19:44:19.002518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:32.240 [2024-12-15 19:44:19.002620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:19.002669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:19.002688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9d070 with addr=10.0.0.3, port=4420 00:23:32.240 [2024-12-15 19:44:19.002699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9d070 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:19.002715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9d070 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:19.002730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:19.002739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:19.002748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:32.240 [2024-12-15 19:44:19.002763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.240 [2024-12-15 19:44:19.003090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.240 [2024-12-15 19:44:19.003157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:19.003222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.240 [2024-12-15 19:44:19.003257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdefce0 with addr=10.0.0.2, port=4420 00:23:32.240 [2024-12-15 19:44:19.003269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdefce0 is same with the state(5) to be set 00:23:32.240 [2024-12-15 19:44:19.003286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdefce0 (9): Bad file descriptor 00:23:32.240 [2024-12-15 19:44:19.003311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:32.240 [2024-12-15 19:44:19.003321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:32.240 [2024-12-15 19:44:19.003330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:32.240 [2024-12-15 19:44:19.003346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
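The errno = 111 reported over and over by posix_sock_create above is the standard Linux ECONNREFUSED: at this point the port 4420 listeners on 10.0.0.2 and 10.0.0.3 are gone (the discovery entries just below report the 4420 paths as "not found" while the 4421 paths are "found again"), so every reconnect attempt to 4420 is refused until the controllers are re-attached on 4421. A quick way to confirm the errno mapping outside the test (a one-liner sketch, not part of the original run):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # ECONNREFUSED Connection refused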
00:23:32.240 [2024-12-15 19:44:19.005544] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:32.240 [2024-12-15 19:44:19.005571] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.240 [2024-12-15 19:44:19.005591] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.240 [2024-12-15 19:44:19.005624] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:32.241 [2024-12-15 19:44:19.005640] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:32.241 [2024-12-15 19:44:19.005653] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:32.241 [2024-12-15 19:44:19.091611] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.241 [2024-12-15 19:44:19.091878] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:33.175 19:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@68 -- # sort 00:23:33.175 19:44:19 -- common/autotest_common.sh@10 -- # set +x 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@68 -- # xargs 00:23:33.175 19:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@64 -- # sort 00:23:33.175 19:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:33.175 19:44:19 -- common/autotest_common.sh@10 -- # set +x 00:23:33.175 19:44:19 -- host/mdns_discovery.sh@64 -- # xargs 00:23:33.175 19:44:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:33.175 19:44:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:33.175 19:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # xargs 00:23:33.175 19:44:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:33.175 19:44:20 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # xargs 00:23:33.175 19:44:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.175 19:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:33.175 19:44:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:33.433 19:44:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:33.433 19:44:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.433 19:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:33.433 19:44:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:33.433 19:44:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.433 19:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:33.433 19:44:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.433 19:44:20 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:33.433 [2024-12-15 19:44:20.208306] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:34.368 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@80 -- # sort 00:23:34.368 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@80 -- # xargs 00:23:34.368 19:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.368 19:44:21 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:34.627 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@68 -- # sort 00:23:34.627 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@68 -- # xargs 00:23:34.627 19:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:34.627 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.627 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@64 -- # sort 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@64 -- # xargs 00:23:34.627 19:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:34.627 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.627 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.627 19:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:34.627 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.627 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.627 19:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:34.627 19:44:21 -- common/autotest_common.sh@650 -- # local es=0 00:23:34.627 19:44:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:34.627 19:44:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:34.627 19:44:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.627 19:44:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:34.627 19:44:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.627 19:44:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:34.627 19:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.627 19:44:21 -- common/autotest_common.sh@10 -- # set +x 00:23:34.627 [2024-12-15 19:44:21.433851] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:34.627 2024/12/15 19:44:21 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:34.627 request: 00:23:34.627 { 00:23:34.627 "method": "bdev_nvme_start_mdns_discovery", 00:23:34.627 "params": { 00:23:34.627 "name": "mdns", 00:23:34.627 "svcname": "_nvme-disc._http", 00:23:34.627 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:34.627 } 00:23:34.627 } 00:23:34.627 Got JSON-RPC error response 00:23:34.627 GoRPCClient: error on JSON-RPC call 00:23:34.627 19:44:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:34.627 19:44:21 -- 
common/autotest_common.sh@653 -- # es=1 00:23:34.627 19:44:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.627 19:44:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.627 19:44:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.627 19:44:21 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:35.195 [2024-12-15 19:44:21.822535] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:35.195 [2024-12-15 19:44:21.922532] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:35.195 [2024-12-15 19:44:22.022540] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:35.195 [2024-12-15 19:44:22.022565] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:35.195 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:35.195 cookie is 0 00:23:35.195 is_local: 1 00:23:35.195 our_own: 0 00:23:35.195 wide_area: 0 00:23:35.195 multicast: 1 00:23:35.195 cached: 1 00:23:35.452 [2024-12-15 19:44:22.122538] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:35.453 [2024-12-15 19:44:22.122564] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:35.453 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:35.453 cookie is 0 00:23:35.453 is_local: 1 00:23:35.453 our_own: 0 00:23:35.453 wide_area: 0 00:23:35.453 multicast: 1 00:23:35.453 cached: 1 00:23:36.388 [2024-12-15 19:44:23.032094] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:36.388 [2024-12-15 19:44:23.032125] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:36.388 [2024-12-15 19:44:23.032144] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:36.388 [2024-12-15 19:44:23.118180] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:36.388 [2024-12-15 19:44:23.131889] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:36.388 [2024-12-15 19:44:23.131912] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:36.388 [2024-12-15 19:44:23.131928] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.388 [2024-12-15 19:44:23.184145] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:36.388 [2024-12-15 19:44:23.184175] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:36.388 [2024-12-15 19:44:23.218269] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:36.388 [2024-12-15 19:44:23.277099] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:36.388 [2024-12-15 19:44:23.277128] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:39.673 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@80 -- # sort 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@80 -- # xargs 00:23:39.673 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.673 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:39.673 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@76 -- # xargs 00:23:39.673 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@76 -- # sort 00:23:39.673 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:39.673 19:44:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.674 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.674 19:44:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:39.674 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.674 19:44:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:39.674 19:44:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:39.932 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:39.932 19:44:26 -- common/autotest_common.sh@650 -- # local es=0 00:23:39.932 19:44:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:39.932 19:44:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:39.932 19:44:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.932 19:44:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:39.932 19:44:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.932 19:44:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:39.932 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.932 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.932 [2024-12-15 19:44:26.618342] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:39.932 2024/12/15 19:44:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:39.932 request: 00:23:39.932 { 00:23:39.932 "method": "bdev_nvme_start_mdns_discovery", 00:23:39.932 "params": { 00:23:39.932 "name": "cdc", 00:23:39.932 "svcname": "_nvme-disc._tcp", 00:23:39.932 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:39.932 } 00:23:39.932 } 00:23:39.932 Got JSON-RPC error response 00:23:39.932 GoRPCClient: error on JSON-RPC call 00:23:39.932 19:44:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:39.932 19:44:26 -- common/autotest_common.sh@653 -- # es=1 00:23:39.932 19:44:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:39.932 19:44:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:39.932 19:44:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:39.932 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@76 -- # sort 00:23:39.932 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.932 19:44:26 -- host/mdns_discovery.sh@76 -- # xargs 00:23:39.933 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.933 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.933 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:39.933 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:39.933 19:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.933 19:44:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.933 19:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@197 -- # kill 98253 00:23:39.933 19:44:26 -- host/mdns_discovery.sh@200 -- # wait 98253 00:23:40.191 [2024-12-15 19:44:26.887828] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:40.191 19:44:27 -- host/mdns_discovery.sh@201 -- # kill 98334 00:23:40.191 Got SIGTERM, quitting. 00:23:40.191 19:44:27 -- host/mdns_discovery.sh@202 -- # kill 98282 00:23:40.191 19:44:27 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:40.191 19:44:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:40.191 19:44:27 -- nvmf/common.sh@116 -- # sync 00:23:40.191 Got SIGTERM, quitting. 
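Both rejected bdev_nvme_start_mdns_discovery calls above are negative checks: the first clashes on the name ("mDNS discovery already running with name mdns"), the second on the service type ("mDNS discovery already running for service _nvme-disc._tcp"), and each is turned down with Code=-17 (File exists). Reconstructed from the logged parameters, the second attempt is equivalent to the following direct invocation of the RPC client (a sketch for reference only, assuming rpc_cmd forwards its arguments to scripts/rpc.py unchanged):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # expected to fail with "File exists" (-17) while the "mdns" discovery service is still running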
00:23:40.191 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:40.191 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:40.191 avahi-daemon 0.8 exiting. 00:23:40.450 19:44:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:40.450 19:44:27 -- nvmf/common.sh@119 -- # set +e 00:23:40.450 19:44:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:40.450 19:44:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:40.450 rmmod nvme_tcp 00:23:40.450 rmmod nvme_fabrics 00:23:40.450 rmmod nvme_keyring 00:23:40.450 19:44:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:40.450 19:44:27 -- nvmf/common.sh@123 -- # set -e 00:23:40.450 19:44:27 -- nvmf/common.sh@124 -- # return 0 00:23:40.450 19:44:27 -- nvmf/common.sh@477 -- # '[' -n 98202 ']' 00:23:40.450 19:44:27 -- nvmf/common.sh@478 -- # killprocess 98202 00:23:40.450 19:44:27 -- common/autotest_common.sh@936 -- # '[' -z 98202 ']' 00:23:40.450 19:44:27 -- common/autotest_common.sh@940 -- # kill -0 98202 00:23:40.450 19:44:27 -- common/autotest_common.sh@941 -- # uname 00:23:40.450 19:44:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.450 19:44:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98202 00:23:40.450 19:44:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:40.450 killing process with pid 98202 00:23:40.450 19:44:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:40.450 19:44:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98202' 00:23:40.450 19:44:27 -- common/autotest_common.sh@955 -- # kill 98202 00:23:40.450 19:44:27 -- common/autotest_common.sh@960 -- # wait 98202 00:23:40.708 19:44:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:40.708 19:44:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:40.708 19:44:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:40.708 19:44:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.708 19:44:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:40.708 19:44:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.708 19:44:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.708 19:44:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.708 19:44:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:40.708 00:23:40.708 real 0m20.952s 00:23:40.708 user 0m40.850s 00:23:40.708 sys 0m2.053s 00:23:40.708 19:44:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:40.708 19:44:27 -- common/autotest_common.sh@10 -- # set +x 00:23:40.708 ************************************ 00:23:40.708 END TEST nvmf_mdns_discovery 00:23:40.708 ************************************ 00:23:40.708 19:44:27 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:40.708 19:44:27 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:40.708 19:44:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:40.708 19:44:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:40.708 19:44:27 -- common/autotest_common.sh@10 -- # set +x 00:23:40.708 ************************************ 00:23:40.708 START TEST nvmf_multipath 00:23:40.708 ************************************ 00:23:40.708 19:44:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:40.968 * Looking for 
test storage... 00:23:40.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:40.968 19:44:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:40.968 19:44:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:40.968 19:44:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:40.968 19:44:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:40.968 19:44:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:40.968 19:44:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:40.968 19:44:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:40.968 19:44:27 -- scripts/common.sh@335 -- # IFS=.-: 00:23:40.968 19:44:27 -- scripts/common.sh@335 -- # read -ra ver1 00:23:40.968 19:44:27 -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.968 19:44:27 -- scripts/common.sh@336 -- # read -ra ver2 00:23:40.968 19:44:27 -- scripts/common.sh@337 -- # local 'op=<' 00:23:40.968 19:44:27 -- scripts/common.sh@339 -- # ver1_l=2 00:23:40.968 19:44:27 -- scripts/common.sh@340 -- # ver2_l=1 00:23:40.968 19:44:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:40.968 19:44:27 -- scripts/common.sh@343 -- # case "$op" in 00:23:40.968 19:44:27 -- scripts/common.sh@344 -- # : 1 00:23:40.968 19:44:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:40.968 19:44:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:40.968 19:44:27 -- scripts/common.sh@364 -- # decimal 1 00:23:40.968 19:44:27 -- scripts/common.sh@352 -- # local d=1 00:23:40.968 19:44:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.968 19:44:27 -- scripts/common.sh@354 -- # echo 1 00:23:40.968 19:44:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:40.968 19:44:27 -- scripts/common.sh@365 -- # decimal 2 00:23:40.968 19:44:27 -- scripts/common.sh@352 -- # local d=2 00:23:40.968 19:44:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.968 19:44:27 -- scripts/common.sh@354 -- # echo 2 00:23:40.968 19:44:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:40.968 19:44:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:40.968 19:44:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:40.968 19:44:27 -- scripts/common.sh@367 -- # return 0 00:23:40.968 19:44:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.968 19:44:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:40.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.968 --rc genhtml_branch_coverage=1 00:23:40.968 --rc genhtml_function_coverage=1 00:23:40.968 --rc genhtml_legend=1 00:23:40.968 --rc geninfo_all_blocks=1 00:23:40.968 --rc geninfo_unexecuted_blocks=1 00:23:40.968 00:23:40.968 ' 00:23:40.968 19:44:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:40.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.968 --rc genhtml_branch_coverage=1 00:23:40.968 --rc genhtml_function_coverage=1 00:23:40.968 --rc genhtml_legend=1 00:23:40.968 --rc geninfo_all_blocks=1 00:23:40.968 --rc geninfo_unexecuted_blocks=1 00:23:40.968 00:23:40.968 ' 00:23:40.968 19:44:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:40.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.968 --rc genhtml_branch_coverage=1 00:23:40.968 --rc genhtml_function_coverage=1 00:23:40.968 --rc genhtml_legend=1 00:23:40.968 --rc geninfo_all_blocks=1 00:23:40.968 --rc geninfo_unexecuted_blocks=1 00:23:40.968 00:23:40.968 ' 
00:23:40.968 19:44:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:40.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.968 --rc genhtml_branch_coverage=1 00:23:40.968 --rc genhtml_function_coverage=1 00:23:40.968 --rc genhtml_legend=1 00:23:40.968 --rc geninfo_all_blocks=1 00:23:40.968 --rc geninfo_unexecuted_blocks=1 00:23:40.968 00:23:40.968 ' 00:23:40.968 19:44:27 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:40.968 19:44:27 -- nvmf/common.sh@7 -- # uname -s 00:23:40.968 19:44:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.968 19:44:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.968 19:44:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.968 19:44:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.968 19:44:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.968 19:44:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.968 19:44:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.968 19:44:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.968 19:44:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.968 19:44:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.968 19:44:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:23:40.968 19:44:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:23:40.968 19:44:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.968 19:44:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.968 19:44:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:40.968 19:44:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:40.968 19:44:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.968 19:44:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.968 19:44:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.968 19:44:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.968 19:44:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.968 19:44:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.968 19:44:27 -- paths/export.sh@5 -- # export PATH 00:23:40.969 19:44:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.969 19:44:27 -- nvmf/common.sh@46 -- # : 0 00:23:40.969 19:44:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:40.969 19:44:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:40.969 19:44:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:40.969 19:44:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.969 19:44:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.969 19:44:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:40.969 19:44:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:40.969 19:44:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:40.969 19:44:27 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:40.969 19:44:27 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:40.969 19:44:27 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.969 19:44:27 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:40.969 19:44:27 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.969 19:44:27 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:40.969 19:44:27 -- host/multipath.sh@30 -- # nvmftestinit 00:23:40.969 19:44:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:40.969 19:44:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.969 19:44:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:40.969 19:44:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:40.969 19:44:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:40.969 19:44:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.969 19:44:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.969 19:44:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.969 19:44:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:40.969 19:44:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:40.969 19:44:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:40.969 19:44:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:40.969 19:44:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:40.969 19:44:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:40.969 19:44:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.969 19:44:27 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.969 19:44:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:40.969 19:44:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:40.969 19:44:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:40.969 19:44:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:40.969 19:44:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:40.969 19:44:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.969 19:44:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:40.969 19:44:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:40.969 19:44:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:40.969 19:44:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:40.969 19:44:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:40.969 19:44:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:40.969 Cannot find device "nvmf_tgt_br" 00:23:40.969 19:44:27 -- nvmf/common.sh@154 -- # true 00:23:40.969 19:44:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:40.969 Cannot find device "nvmf_tgt_br2" 00:23:40.969 19:44:27 -- nvmf/common.sh@155 -- # true 00:23:40.969 19:44:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:40.969 19:44:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:40.969 Cannot find device "nvmf_tgt_br" 00:23:40.969 19:44:27 -- nvmf/common.sh@157 -- # true 00:23:40.969 19:44:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:40.969 Cannot find device "nvmf_tgt_br2" 00:23:40.969 19:44:27 -- nvmf/common.sh@158 -- # true 00:23:40.969 19:44:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:41.227 19:44:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:41.227 19:44:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.227 19:44:27 -- nvmf/common.sh@161 -- # true 00:23:41.227 19:44:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.227 19:44:27 -- nvmf/common.sh@162 -- # true 00:23:41.227 19:44:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.227 19:44:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.227 19:44:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.227 19:44:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.227 19:44:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.227 19:44:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.227 19:44:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.227 19:44:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:41.227 19:44:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:41.227 19:44:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:41.227 19:44:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:41.227 19:44:28 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:41.227 19:44:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:41.227 19:44:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.227 19:44:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.227 19:44:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.228 19:44:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:41.228 19:44:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:41.228 19:44:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.228 19:44:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:41.228 19:44:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:41.228 19:44:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:41.228 19:44:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:41.228 19:44:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:41.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:23:41.228 00:23:41.228 --- 10.0.0.2 ping statistics --- 00:23:41.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.228 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:41.228 19:44:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:41.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:41.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:23:41.228 00:23:41.228 --- 10.0.0.3 ping statistics --- 00:23:41.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.228 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:41.228 19:44:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:41.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:23:41.228 00:23:41.228 --- 10.0.0.1 ping statistics --- 00:23:41.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.228 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:41.228 19:44:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.228 19:44:28 -- nvmf/common.sh@421 -- # return 0 00:23:41.228 19:44:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:41.228 19:44:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.228 19:44:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:41.228 19:44:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:41.228 19:44:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.228 19:44:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:41.228 19:44:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:41.492 19:44:28 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:41.492 19:44:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:41.492 19:44:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.492 19:44:28 -- common/autotest_common.sh@10 -- # set +x 00:23:41.492 19:44:28 -- nvmf/common.sh@469 -- # nvmfpid=98857 00:23:41.492 19:44:28 -- nvmf/common.sh@470 -- # waitforlisten 98857 00:23:41.492 19:44:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:41.492 19:44:28 -- common/autotest_common.sh@829 -- # '[' -z 98857 ']' 00:23:41.492 19:44:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.492 19:44:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.492 19:44:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.492 19:44:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.492 19:44:28 -- common/autotest_common.sh@10 -- # set +x 00:23:41.492 [2024-12-15 19:44:28.187500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:41.492 [2024-12-15 19:44:28.187591] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.492 [2024-12-15 19:44:28.328316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:41.761 [2024-12-15 19:44:28.439431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:41.761 [2024-12-15 19:44:28.440002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.761 [2024-12-15 19:44:28.440160] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.761 [2024-12-15 19:44:28.440183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
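The nvmf/common.sh trace above first tears down any stale interfaces and then rebuilds the fixed test topology: a network namespace nvmf_tgt_ns_spdk holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, a bridge nvmf_br joining the host-side peers, an iptables rule admitting NVMe/TCP on port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, with the names, addresses and rules taken from the trace and error handling omitted:

  #!/usr/bin/env bash
  # Minimal sketch of the veth/bridge topology used by the nvmf TCP tests.
  # Names and addresses mirror the trace above; run as root.
  set -e
  NS=nvmf_tgt_ns_spdk

  # Namespace for the target side.
  ip netns add "$NS"

  # Three veth pairs: one initiator-facing, two target-facing.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and address everything.
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring links up on both sides.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # Bridge the host-side peers together and allow NVMe/TCP (4420) in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity sanity checks, as in the trace.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1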
00:23:41.761 [2024-12-15 19:44:28.440337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.761 [2024-12-15 19:44:28.440396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.327 19:44:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.327 19:44:29 -- common/autotest_common.sh@862 -- # return 0 00:23:42.327 19:44:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:42.327 19:44:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.327 19:44:29 -- common/autotest_common.sh@10 -- # set +x 00:23:42.327 19:44:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.327 19:44:29 -- host/multipath.sh@33 -- # nvmfapp_pid=98857 00:23:42.327 19:44:29 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:42.895 [2024-12-15 19:44:29.497419] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.895 19:44:29 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:43.154 Malloc0 00:23:43.154 19:44:29 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:43.413 19:44:30 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.671 19:44:30 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.930 [2024-12-15 19:44:30.661628] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.930 19:44:30 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:44.188 [2024-12-15 19:44:30.961762] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:44.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.188 19:44:30 -- host/multipath.sh@44 -- # bdevperf_pid=98961 00:23:44.188 19:44:30 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:44.188 19:44:30 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.188 19:44:30 -- host/multipath.sh@47 -- # waitforlisten 98961 /var/tmp/bdevperf.sock 00:23:44.188 19:44:30 -- common/autotest_common.sh@829 -- # '[' -z 98961 ']' 00:23:44.188 19:44:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.189 19:44:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.189 19:44:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
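Once the namespace exists, the target is started inside it and configured over JSON-RPC exactly as traced above (host/multipath.sh@32-47): a TCP transport with 8 KiB in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, an ANA-reporting subsystem, listeners on ports 4420 and 4421, and a bdevperf instance idling on its own RPC socket. A condensed sketch follows; SPDK_DIR is the repo path from this run, and the readiness loop is a simplified stand-in for the waitforlisten helper used in the trace:

  #!/usr/bin/env bash
  # Condensed sketch of the target/bdevperf bring-up traced above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  # nvmf_tgt runs inside the test namespace, cores 0-1, tracing enabled.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &

  # Wait until the target answers RPC on the default /var/tmp/spdk.sock
  # (simplified stand-in for waitforlisten).
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # TCP transport, one malloc-backed namespace, ANA-enabled subsystem, two portals.
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

  # Host side: bdevperf idles (-z) on its own RPC socket until controllers are attached.
  "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &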
00:23:44.189 19:44:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.189 19:44:30 -- common/autotest_common.sh@10 -- # set +x 00:23:45.565 19:44:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.565 19:44:32 -- common/autotest_common.sh@862 -- # return 0 00:23:45.565 19:44:32 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:45.565 19:44:32 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:46.133 Nvme0n1 00:23:46.133 19:44:32 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:46.391 Nvme0n1 00:23:46.391 19:44:33 -- host/multipath.sh@78 -- # sleep 1 00:23:46.391 19:44:33 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:47.327 19:44:34 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:47.327 19:44:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:47.586 19:44:34 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:47.845 19:44:34 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:47.845 19:44:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:47.845 19:44:34 -- host/multipath.sh@65 -- # dtrace_pid=99054 00:23:47.845 19:44:34 -- host/multipath.sh@66 -- # sleep 6 00:23:54.407 19:44:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:54.407 19:44:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:54.407 19:44:41 -- host/multipath.sh@67 -- # active_port=4421 00:23:54.407 19:44:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.407 Attaching 4 probes... 
00:23:54.407 @path[10.0.0.2, 4421]: 18672 00:23:54.407 @path[10.0.0.2, 4421]: 19538 00:23:54.407 @path[10.0.0.2, 4421]: 20561 00:23:54.407 @path[10.0.0.2, 4421]: 19983 00:23:54.407 @path[10.0.0.2, 4421]: 18556 00:23:54.407 19:44:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.407 19:44:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.407 19:44:41 -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.407 19:44:41 -- host/multipath.sh@69 -- # port=4421 00:23:54.407 19:44:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.407 19:44:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.407 19:44:41 -- host/multipath.sh@72 -- # kill 99054 00:23:54.407 19:44:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.407 19:44:41 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:54.408 19:44:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.408 19:44:41 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.975 19:44:41 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:54.975 19:44:41 -- host/multipath.sh@65 -- # dtrace_pid=99187 00:23:54.975 19:44:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:54.975 19:44:41 -- host/multipath.sh@66 -- # sleep 6 00:24:01.540 19:44:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:01.540 19:44:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:01.540 19:44:47 -- host/multipath.sh@67 -- # active_port=4420 00:24:01.540 19:44:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:01.540 Attaching 4 probes... 
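The trace at host/multipath.sh@51-59 above is the multipath core of the test: bdevperf is handed the same subsystem through both portals, with -x multipath on the second attach so the two connections become paths of one Nvme0n1 bdev, and the target's per-listener ANA state is then flipped with nvmf_subsystem_listener_set_ana_state. A sketch of those calls, copied from the trace (RPC, BPERF and NQN are shorthand introduced here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF="-s /var/tmp/bdevperf.sock"          # bdevperf's RPC socket
  NQN=nqn.2016-06.io.spdk:cnode1

  # Host side (bdevperf): retry behaviour (-r -1) as in the trace, then add both
  # portals as paths of the same Nvme0 controller; the timeout flags are copied verbatim.
  $RPC $BPERF bdev_nvme_set_options -r -1
  $RPC $BPERF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$NQN" -l -1 -o 10
  $RPC $BPERF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n "$NQN" -x multipath -l -1 -o 10

  # Target side: advertise 4420 as non_optimized and 4421 as optimized, so the
  # initiator is expected to steer I/O to 4421 (verified by confirm_io_on_port).
  $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized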
00:24:01.540 @path[10.0.0.2, 4420]: 18889 00:24:01.540 @path[10.0.0.2, 4420]: 19201 00:24:01.541 @path[10.0.0.2, 4420]: 19974 00:24:01.541 @path[10.0.0.2, 4420]: 20608 00:24:01.541 @path[10.0.0.2, 4420]: 19762 00:24:01.541 19:44:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:01.541 19:44:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:01.541 19:44:47 -- host/multipath.sh@69 -- # sed -n 1p 00:24:01.541 19:44:47 -- host/multipath.sh@69 -- # port=4420 00:24:01.541 19:44:47 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:01.541 19:44:47 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:01.541 19:44:47 -- host/multipath.sh@72 -- # kill 99187 00:24:01.541 19:44:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:01.541 19:44:47 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:01.541 19:44:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:01.541 19:44:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.541 19:44:48 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:01.541 19:44:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.541 19:44:48 -- host/multipath.sh@65 -- # dtrace_pid=99319 00:24:01.541 19:44:48 -- host/multipath.sh@66 -- # sleep 6 00:24:08.105 19:44:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:08.105 19:44:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:08.105 19:44:54 -- host/multipath.sh@67 -- # active_port=4421 00:24:08.105 19:44:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.105 Attaching 4 probes... 
00:24:08.105 @path[10.0.0.2, 4421]: 13699 00:24:08.105 @path[10.0.0.2, 4421]: 19651 00:24:08.105 @path[10.0.0.2, 4421]: 20430 00:24:08.105 @path[10.0.0.2, 4421]: 19543 00:24:08.105 @path[10.0.0.2, 4421]: 19208 00:24:08.105 19:44:54 -- host/multipath.sh@69 -- # sed -n 1p 00:24:08.105 19:44:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:08.105 19:44:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:08.105 19:44:54 -- host/multipath.sh@69 -- # port=4421 00:24:08.105 19:44:54 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.105 19:44:54 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.105 19:44:54 -- host/multipath.sh@72 -- # kill 99319 00:24:08.105 19:44:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.105 19:44:54 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:08.105 19:44:54 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:08.364 19:44:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:08.623 19:44:55 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:08.623 19:44:55 -- host/multipath.sh@65 -- # dtrace_pid=99455 00:24:08.623 19:44:55 -- host/multipath.sh@66 -- # sleep 6 00:24:08.623 19:44:55 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:15.221 19:45:01 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:15.221 19:45:01 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:15.221 19:45:01 -- host/multipath.sh@67 -- # active_port= 00:24:15.221 19:45:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.221 Attaching 4 probes... 
00:24:15.221 00:24:15.221 00:24:15.221 00:24:15.221 00:24:15.221 00:24:15.221 19:45:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:15.221 19:45:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:15.221 19:45:01 -- host/multipath.sh@69 -- # sed -n 1p 00:24:15.221 19:45:01 -- host/multipath.sh@69 -- # port= 00:24:15.221 19:45:01 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:15.221 19:45:01 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:15.221 19:45:01 -- host/multipath.sh@72 -- # kill 99455 00:24:15.221 19:45:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.221 19:45:01 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:15.221 19:45:01 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.221 19:45:01 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.480 19:45:02 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:15.480 19:45:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:15.480 19:45:02 -- host/multipath.sh@65 -- # dtrace_pid=99580 00:24:15.480 19:45:02 -- host/multipath.sh@66 -- # sleep 6 00:24:22.048 19:45:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:22.048 19:45:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:22.048 19:45:08 -- host/multipath.sh@67 -- # active_port=4421 00:24:22.048 19:45:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.048 Attaching 4 probes... 
00:24:22.048 @path[10.0.0.2, 4421]: 20430 00:24:22.048 @path[10.0.0.2, 4421]: 21449 00:24:22.048 @path[10.0.0.2, 4421]: 21847 00:24:22.048 @path[10.0.0.2, 4421]: 21791 00:24:22.048 @path[10.0.0.2, 4421]: 21908 00:24:22.048 19:45:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:22.048 19:45:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:22.048 19:45:08 -- host/multipath.sh@69 -- # sed -n 1p 00:24:22.048 19:45:08 -- host/multipath.sh@69 -- # port=4421 00:24:22.048 19:45:08 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:22.048 19:45:08 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:22.048 19:45:08 -- host/multipath.sh@72 -- # kill 99580 00:24:22.048 19:45:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.048 19:45:08 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.048 [2024-12-15 19:45:08.726434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.726997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727127] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 [2024-12-15 19:45:08.727333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafe70 is same with the state(5) to be set 00:24:22.048 19:45:08 -- host/multipath.sh@101 -- # sleep 1 00:24:22.985 19:45:09 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 
00:24:22.985 19:45:09 -- host/multipath.sh@65 -- # dtrace_pid=99716 00:24:22.985 19:45:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:22.985 19:45:09 -- host/multipath.sh@66 -- # sleep 6 00:24:29.549 19:45:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:29.549 19:45:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:29.549 19:45:16 -- host/multipath.sh@67 -- # active_port=4420 00:24:29.549 19:45:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.549 Attaching 4 probes... 00:24:29.549 @path[10.0.0.2, 4420]: 21500 00:24:29.549 @path[10.0.0.2, 4420]: 21865 00:24:29.549 @path[10.0.0.2, 4420]: 21984 00:24:29.549 @path[10.0.0.2, 4420]: 22019 00:24:29.549 @path[10.0.0.2, 4420]: 21902 00:24:29.549 19:45:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:29.549 19:45:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:29.549 19:45:16 -- host/multipath.sh@69 -- # sed -n 1p 00:24:29.549 19:45:16 -- host/multipath.sh@69 -- # port=4420 00:24:29.549 19:45:16 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:29.549 19:45:16 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:29.549 19:45:16 -- host/multipath.sh@72 -- # kill 99716 00:24:29.549 19:45:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.549 19:45:16 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.549 [2024-12-15 19:45:16.345697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.549 19:45:16 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:29.808 19:45:16 -- host/multipath.sh@111 -- # sleep 6 00:24:36.375 19:45:22 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:36.375 19:45:22 -- host/multipath.sh@65 -- # dtrace_pid=99908 00:24:36.375 19:45:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:36.375 19:45:22 -- host/multipath.sh@66 -- # sleep 6 00:24:42.967 19:45:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:42.967 19:45:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:42.967 19:45:28 -- host/multipath.sh@67 -- # active_port=4421 00:24:42.967 19:45:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:42.967 Attaching 4 probes... 
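Each confirm_io_on_port step above follows the same pattern: ask the target which listener currently has the expected ANA state, let the bpftrace probes from scripts/bpf/nvmf_path.bt count I/O per portal into trace.txt for six seconds, then parse the first @path line and require both values to match the expected port. A rough reconstruction of that check; the jq/awk/cut/sed pipeline is taken from the trace, while the surrounding variable names are chosen here for illustration:

  # Rough reconstruction of the confirm_io_on_port check.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  expected_state=$1      # optimized / non_optimized / inaccessible / ""
  expected_port=$2       # 4420, 4421, or "" when no path should carry I/O

  # Which listener does the target report in the expected ANA state?
  active_port=$("$RPC" nvmf_subsystem_get_listeners "$NQN" \
      | jq -r --arg s "$expected_state" \
          '.[] | select (.ana_states[0].ana_state==$s) | .address.trsvcid')

  # trace.txt holds lines like "@path[10.0.0.2, 4421]: 18672" written by the
  # bpftrace script; take the port from the first one.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

  [[ "$port" == "$expected_port" ]] && [[ "$port" == "$active_port" ]]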
00:24:42.967 @path[10.0.0.2, 4421]: 20917 00:24:42.967 @path[10.0.0.2, 4421]: 21278 00:24:42.967 @path[10.0.0.2, 4421]: 21285 00:24:42.967 @path[10.0.0.2, 4421]: 21264 00:24:42.967 @path[10.0.0.2, 4421]: 21343 00:24:42.967 19:45:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:42.967 19:45:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:42.967 19:45:28 -- host/multipath.sh@69 -- # sed -n 1p 00:24:42.967 19:45:28 -- host/multipath.sh@69 -- # port=4421 00:24:42.967 19:45:28 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:42.967 19:45:28 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:42.967 19:45:28 -- host/multipath.sh@72 -- # kill 99908 00:24:42.967 19:45:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:42.967 19:45:28 -- host/multipath.sh@114 -- # killprocess 98961 00:24:42.967 19:45:28 -- common/autotest_common.sh@936 -- # '[' -z 98961 ']' 00:24:42.967 19:45:28 -- common/autotest_common.sh@940 -- # kill -0 98961 00:24:42.967 19:45:28 -- common/autotest_common.sh@941 -- # uname 00:24:42.967 19:45:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.967 19:45:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98961 00:24:42.967 killing process with pid 98961 00:24:42.967 19:45:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:42.967 19:45:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:42.967 19:45:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98961' 00:24:42.967 19:45:29 -- common/autotest_common.sh@955 -- # kill 98961 00:24:42.967 19:45:29 -- common/autotest_common.sh@960 -- # wait 98961 00:24:42.967 Connection closed with partial response: 00:24:42.967 00:24:42.967 00:24:42.967 19:45:29 -- host/multipath.sh@116 -- # wait 98961 00:24:42.967 19:45:29 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:42.967 [2024-12-15 19:44:31.032569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:42.967 [2024-12-15 19:44:31.032718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98961 ] 00:24:42.967 [2024-12-15 19:44:31.170168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.967 [2024-12-15 19:44:31.265312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.967 Running I/O for 90 seconds... 
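The teardown above runs the killprocess helper from autotest_common.sh: check that the pid is still alive, inspect what it actually is (ps reports reactor_2 for bdevperf), log the kill, then kill and reap it before try.txt is dumped. A loose sketch, reconstructed from the trace with the helper's extra branches omitted:

  # Loose reconstruction of killprocess as exercised above; the real helper
  # has additional platform- and sudo-related branches that are omitted here.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")   # reactor_2 for bdevperf in this run
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap if it is our child
  }

  killprocess 98961    # bdevperf pid in this run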
00:24:42.967 [2024-12-15 19:44:41.550881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.967 [2024-12-15 19:44:41.550995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.551969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.551985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.552021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.552059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.552075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.552097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.967 [2024-12-15 19:44:41.552112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.967 [2024-12-15 19:44:41.552133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.552743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.552777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.552825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.552877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.552928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.552950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:42.968 [2024-12-15 19:44:41.553024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.968 [2024-12-15 19:44:41.553789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.968 [2024-12-15 19:44:41.553865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.968 [2024-12-15 19:44:41.553879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.553900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.553915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.553980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.553995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:24:42.969 [2024-12-15 19:44:41.554268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.554672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.554797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.554841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.554944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.554965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.554979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.555001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.555031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.555065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.555079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.555099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.555113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.555133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.555162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.555183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.555198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.556202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.556262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.556297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.556381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.969 [2024-12-15 19:44:41.556417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.969 [2024-12-15 19:44:41.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.969 [2024-12-15 19:44:41.556467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:42.970 [2024-12-15 19:44:41.556547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.556699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.556751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.556786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.556975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.556991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:24:42.970 [2024-12-15 19:44:41.557914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.557964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.557980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.558033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.558094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.558129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.970 [2024-12-15 19:44:41.558164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.558230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.558265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.970 [2024-12-15 19:44:41.558286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.970 [2024-12-15 19:44:41.558301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:41.558322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:41.558336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.135839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.135941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.135988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.136950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.136972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.136988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.137010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.137025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.137046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.137061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.137082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.137097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.137118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.137133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.138069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.971 [2024-12-15 19:44:48.138101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.138128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.138146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.138168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:82 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.138198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.971 [2024-12-15 19:44:48.138236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.971 [2024-12-15 19:44:48.138265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.138550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 
19:44:48.138608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.138690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.138753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.138832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.138981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.138997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.139629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.139659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.139690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.972 [2024-12-15 19:44:48.139721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.972 [2024-12-15 19:44:48.139807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.972 [2024-12-15 19:44:48.139835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.139871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.139885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.139918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.139934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.139954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.139968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.139987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140274] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.140490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.140509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.140522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 
19:44:48.141423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.141872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.141985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.142000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.142021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.142044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.142066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.973 [2024-12-15 19:44:48.142084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.973 [2024-12-15 19:44:48.142104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.973 [2024-12-15 19:44:48.142119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.142942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.142975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.142995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.143009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.143085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.143119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.143152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.143299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.143420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.143434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144026] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.144093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.974 [2024-12-15 19:44:48.144139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.974 [2024-12-15 19:44:48.144320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.974 [2024-12-15 19:44:48.144334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.144400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:0028 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.144466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.144604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.144685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.144746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144777] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.144980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.144998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.145090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 
[2024-12-15 19:44:48.145123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.145184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.145234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.975 [2024-12-15 19:44:48.145301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.975 [2024-12-15 19:44:48.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.975 [2024-12-15 19:44:48.145703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.145951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.145969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.145982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.146708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.146867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.146887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.147723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.976 [2024-12-15 19:44:48.147749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.147773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.976 [2024-12-15 19:44:48.147788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.976 [2024-12-15 19:44:48.147806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 
19:44:48.147820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.147844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.147858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.147876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.147889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.147908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.147936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.147960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.147973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.147991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.148813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:24:42.977 [2024-12-15 19:44:48.148907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.148944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.148958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.156787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.156888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.156926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.156961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.156982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.156996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.157017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.977 [2024-12-15 19:44:48.157031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.977 [2024-12-15 19:44:48.157052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.977 [2024-12-15 19:44:48.157066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.157102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.157140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.157206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.157270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.157318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.157351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.157405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.157438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.157458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.157471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.158933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.158960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.158978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.159067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.159156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159182] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.978 [2024-12-15 19:44:48.159579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.978 [2024-12-15 19:44:48.159605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.978 [2024-12-15 19:44:48.159623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 
19:44:48.159649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.159667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.159711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.159755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.159817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.159891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.159937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.159963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.159981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.160770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.160842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.160892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.160937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.160964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.979 [2024-12-15 19:44:48.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.979 [2024-12-15 19:44:48.161460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.979 [2024-12-15 19:44:48.161479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.161643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.161688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.161877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.161896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.162981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 
19:44:48.163112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.163962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.163989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.164007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.164034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.164052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.164078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.164096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.164123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.164141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.164167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.980 [2024-12-15 19:44:48.164185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.980 [2024-12-15 19:44:48.164211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.980 [2024-12-15 19:44:48.164237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.164904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.164980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.164998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.165044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.165145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.165192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.165262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.165306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.165363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.165408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.165434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.165452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.166294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.166349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.166471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.166534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.981 [2024-12-15 19:44:48.166893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.981 [2024-12-15 19:44:48.166938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.981 [2024-12-15 19:44:48.166964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.166982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.167126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.167219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.167308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 
[2024-12-15 19:44:48.167788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.167942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.167970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.167988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.168033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.982 [2024-12-15 19:44:48.168123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.982 [2024-12-15 19:44:48.168842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.982 [2024-12-15 19:44:48.168873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.168910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.168929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.168966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.168986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.169899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.169972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.169991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.170017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.170036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.171069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.171125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 
19:44:48.171217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.171266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.171316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.983 [2024-12-15 19:44:48.171350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.983 [2024-12-15 19:44:48.171520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.983 [2024-12-15 19:44:48.171534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113232 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.171736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.171767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.171863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.171898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.171966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.171992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 
m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.172821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.172968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.984 [2024-12-15 19:44:48.172983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.173003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.173018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.173038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.984 [2024-12-15 19:44:48.173052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.984 [2024-12-15 19:44:48.173072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.173086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.173797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.173893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.173943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.173977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.173996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.174158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.174253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.174347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.174450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.985 [2024-12-15 19:44:48.174522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 19:44:48.174909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.985 [2024-12-15 19:44:48.174934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.985 [2024-12-15 
19:44:48.174955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.174968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.174987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.986 [2024-12-15 19:44:48.175918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.175980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.175996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.986 [2024-12-15 19:44:48.176195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.986 [2024-12-15 19:44:48.176224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.176270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.176332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.176363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.176394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.176432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.176465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.176496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.176515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.176529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 
19:44:48.177389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.177957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.177978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.177991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.178023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.178055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.178087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.178118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.178150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.987 [2024-12-15 19:44:48.178182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.987 [2024-12-15 19:44:48.178259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.987 [2024-12-15 19:44:48.178277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:56 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.178960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.178979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.178992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179642] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.988 [2024-12-15 19:44:48.179799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.179968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.179982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.180001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.180015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:24:42.988 [2024-12-15 19:44:48.180035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.988 [2024-12-15 19:44:48.180049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 
[2024-12-15 19:44:48.180709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.989 [2024-12-15 19:44:48.180964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.180982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.180995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181392] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.989 [2024-12-15 19:44:48.181405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.989 [2024-12-15 19:44:48.181424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.181569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.181600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.181630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.181661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.181969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.181983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.182015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.182046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.182078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.182110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.182142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 19:44:48.182173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.182207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.182220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.990 [2024-12-15 
19:44:48.183214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.990 [2024-12-15 19:44:48.183277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.990 [2024-12-15 19:44:48.183295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.183901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.183970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.183989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 
m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.991 [2024-12-15 19:44:48.184530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.991 [2024-12-15 19:44:48.184548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.991 [2024-12-15 19:44:48.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.184592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.184623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.184654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.184685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.184716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.184747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.184765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.184778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.185941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.185973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.185992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.186038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.186103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.992 [2024-12-15 19:44:48.186166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.992 [2024-12-15 19:44:48.186487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.992 [2024-12-15 19:44:48.186501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.992 
[2024-12-15 19:44:48.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.186570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.186673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.186733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.186807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.186978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.186997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.187441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.187472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.187502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.993 [2024-12-15 19:44:48.187533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187923] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.993 [2024-12-15 19:44:48.187936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.993 [2024-12-15 19:44:48.187956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.187969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.187988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.188042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.188077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.188956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.188969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 
19:44:48.188988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.994 [2024-12-15 19:44:48.189731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.994 [2024-12-15 19:44:48.189793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.994 [2024-12-15 19:44:48.189811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.189824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.189872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.189887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.189906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.189919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.189938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.189951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.189970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.189984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.190528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.190548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.190562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.995 [2024-12-15 19:44:48.191715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.995 [2024-12-15 19:44:48.191733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.995 [2024-12-15 19:44:48.191745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.191776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.191871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.191910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.191942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.191975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.191997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.192010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 
[2024-12-15 19:44:48.192297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.192375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.192484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.192515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.192629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.192967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.192988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.996 [2024-12-15 19:44:48.193354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.996 [2024-12-15 19:44:48.193405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.996 [2024-12-15 19:44:48.193435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.193519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.193552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 
sqhd:0056 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.193943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.193970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.194008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.194022] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.194042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.194056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.194076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.194090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.194110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.200964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.201056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.201434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 
19:44:48.201707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.201753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.201962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.201977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113192 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.997 [2024-12-15 19:44:48.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.202444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.997 [2024-12-15 19:44:48.202486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.997 [2024-12-15 19:44:48.202513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.202528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.202611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.202694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.202832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.202887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.202971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.202998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 
m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.203687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:48.203807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:48.203982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:48.204016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:55.249078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 
19:44:55.249260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:55.249394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.998 [2024-12-15 19:44:55.249427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.998 [2024-12-15 19:44:55.249512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.998 [2024-12-15 19:44:55.249525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.249562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.249595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92480 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.249628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.249662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.249703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.249740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.249775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.249796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.249844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.250559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.250600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.250639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.250678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.250972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.250995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.251010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.251048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251146] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.251585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.251792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.999 [2024-12-15 19:44:55.251863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.999 [2024-12-15 19:44:55.251937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.999 [2024-12-15 19:44:55.251965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.251981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.252023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.252189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.252382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.252437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.252492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.252961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.252977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:43.000 [2024-12-15 19:44:55.253195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.253429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.000 [2024-12-15 19:44:55.253469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.000 [2024-12-15 19:44:55.253758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.000 [2024-12-15 19:44:55.253773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.253799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.253814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.253840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.253860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.253898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.253917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.253953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.253969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.253995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:24:43.001 [2024-12-15 19:44:55.254562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.001 [2024-12-15 19:44:55.254899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:44:55.254977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:44:55.254992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.727870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.727943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.727975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.727992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 
19:45:08.728351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.001 [2024-12-15 19:45:08.728436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.001 [2024-12-15 19:45:08.728448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.728744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.728982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.728994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46816 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:43.002 [2024-12-15 19:45:08.729482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.002 [2024-12-15 19:45:08.729555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.002 [2024-12-15 19:45:08.729569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.002 [2024-12-15 19:45:08.729581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.729605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.729630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.729654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.729704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.729765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.729980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.729992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.730440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.003 [2024-12-15 19:45:08.730523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.003 [2024-12-15 19:45:08.730601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.003 [2024-12-15 19:45:08.730614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.730658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.730751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.730775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.730877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.730931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 
[2024-12-15 19:45:08.730970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.730982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.730995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.004 [2024-12-15 19:45:08.731403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.004 [2024-12-15 19:45:08.731622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a9120 is same with the state(5) to be set 00:24:43.004 [2024-12-15 19:45:08.731651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.004 [2024-12-15 19:45:08.731661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.004 [2024-12-15 19:45:08.731670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46736 len:8 PRP1 0x0 PRP2 0x0 00:24:43.004 [2024-12-15 19:45:08.731682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731747] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a9120 was disconnected and freed. reset controller. 00:24:43.004 [2024-12-15 19:45:08.731883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.004 [2024-12-15 19:45:08.731908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.004 [2024-12-15 19:45:08.731934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.004 [2024-12-15 19:45:08.731971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.731984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.004 [2024-12-15 19:45:08.732002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.004 [2024-12-15 19:45:08.732014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b9b60 is same with the state(5) to be set 00:24:43.005 [2024-12-15 19:45:08.733188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.005 [2024-12-15 19:45:08.733246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b9b60 (9): Bad file descriptor 00:24:43.005 [2024-12-15 19:45:08.733358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.005 [2024-12-15 19:45:08.733412] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.005 [2024-12-15 19:45:08.733440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b9b60 with addr=10.0.0.2, port=4421 00:24:43.005 [2024-12-15 19:45:08.733455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b9b60 is same with the state(5) to be set 00:24:43.005 [2024-12-15 19:45:08.733477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b9b60 (9): Bad file descriptor 00:24:43.005 [2024-12-15 19:45:08.733497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.005 [2024-12-15 19:45:08.733510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.005 [2024-12-15 19:45:08.733523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.005 [2024-12-15 19:45:08.733545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.005 [2024-12-15 19:45:08.733559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.005 [2024-12-15 19:45:18.785687] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:43.005 Received shutdown signal, test time was about 55.770881 seconds 00:24:43.005 00:24:43.005 Latency(us) 00:24:43.005 [2024-12-15T19:45:29.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.005 [2024-12-15T19:45:29.901Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:43.005 Verification LBA range: start 0x0 length 0x4000 00:24:43.005 Nvme0n1 : 55.77 11803.18 46.11 0.00 0.00 10826.61 1117.09 7015926.69 00:24:43.005 [2024-12-15T19:45:29.901Z] =================================================================================================================== 00:24:43.005 [2024-12-15T19:45:29.901Z] Total : 11803.18 46.11 0.00 0.00 10826.61 1117.09 7015926.69 00:24:43.005 19:45:29 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.005 19:45:29 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:43.005 19:45:29 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:43.005 19:45:29 -- host/multipath.sh@125 -- # nvmftestfini 00:24:43.005 19:45:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:43.005 19:45:29 -- nvmf/common.sh@116 -- # sync 00:24:43.005 19:45:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:43.005 19:45:29 -- nvmf/common.sh@119 -- # set +e 00:24:43.005 19:45:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:43.005 19:45:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:43.005 rmmod nvme_tcp 00:24:43.005 rmmod nvme_fabrics 00:24:43.005 rmmod nvme_keyring 00:24:43.005 19:45:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:43.005 19:45:29 -- nvmf/common.sh@123 -- # set -e 00:24:43.005 19:45:29 -- nvmf/common.sh@124 -- # return 0 00:24:43.005 19:45:29 -- nvmf/common.sh@477 -- # '[' -n 98857 ']' 00:24:43.005 19:45:29 -- nvmf/common.sh@478 -- # killprocess 98857 00:24:43.005 19:45:29 -- common/autotest_common.sh@936 -- # '[' -z 98857 ']' 00:24:43.005 19:45:29 -- common/autotest_common.sh@940 -- # kill -0 98857 00:24:43.005 19:45:29 -- 
common/autotest_common.sh@941 -- # uname 00:24:43.005 19:45:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.005 19:45:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98857 00:24:43.005 19:45:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:43.005 19:45:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:43.005 killing process with pid 98857 00:24:43.005 19:45:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98857' 00:24:43.005 19:45:29 -- common/autotest_common.sh@955 -- # kill 98857 00:24:43.005 19:45:29 -- common/autotest_common.sh@960 -- # wait 98857 00:24:43.272 19:45:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:43.272 19:45:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:43.272 19:45:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:43.272 19:45:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.272 19:45:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:43.272 19:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.272 19:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.272 19:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.272 19:45:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:43.272 00:24:43.272 real 1m2.513s 00:24:43.272 user 2m55.123s 00:24:43.272 sys 0m15.253s 00:24:43.272 19:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:43.272 19:45:30 -- common/autotest_common.sh@10 -- # set +x 00:24:43.272 ************************************ 00:24:43.272 END TEST nvmf_multipath 00:24:43.272 ************************************ 00:24:43.272 19:45:30 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:43.272 19:45:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:43.272 19:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:43.272 19:45:30 -- common/autotest_common.sh@10 -- # set +x 00:24:43.272 ************************************ 00:24:43.272 START TEST nvmf_timeout 00:24:43.272 ************************************ 00:24:43.272 19:45:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:43.530 * Looking for test storage... 
00:24:43.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:43.530 19:45:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:43.530 19:45:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:43.530 19:45:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:43.530 19:45:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:43.530 19:45:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:43.530 19:45:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:43.530 19:45:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:43.530 19:45:30 -- scripts/common.sh@335 -- # IFS=.-: 00:24:43.530 19:45:30 -- scripts/common.sh@335 -- # read -ra ver1 00:24:43.530 19:45:30 -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.530 19:45:30 -- scripts/common.sh@336 -- # read -ra ver2 00:24:43.530 19:45:30 -- scripts/common.sh@337 -- # local 'op=<' 00:24:43.530 19:45:30 -- scripts/common.sh@339 -- # ver1_l=2 00:24:43.530 19:45:30 -- scripts/common.sh@340 -- # ver2_l=1 00:24:43.530 19:45:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:43.530 19:45:30 -- scripts/common.sh@343 -- # case "$op" in 00:24:43.530 19:45:30 -- scripts/common.sh@344 -- # : 1 00:24:43.530 19:45:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:43.530 19:45:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.530 19:45:30 -- scripts/common.sh@364 -- # decimal 1 00:24:43.530 19:45:30 -- scripts/common.sh@352 -- # local d=1 00:24:43.530 19:45:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.530 19:45:30 -- scripts/common.sh@354 -- # echo 1 00:24:43.530 19:45:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:43.530 19:45:30 -- scripts/common.sh@365 -- # decimal 2 00:24:43.530 19:45:30 -- scripts/common.sh@352 -- # local d=2 00:24:43.530 19:45:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.530 19:45:30 -- scripts/common.sh@354 -- # echo 2 00:24:43.530 19:45:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:43.530 19:45:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:43.530 19:45:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:43.530 19:45:30 -- scripts/common.sh@367 -- # return 0 00:24:43.530 19:45:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.530 19:45:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:43.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.530 --rc genhtml_branch_coverage=1 00:24:43.530 --rc genhtml_function_coverage=1 00:24:43.530 --rc genhtml_legend=1 00:24:43.530 --rc geninfo_all_blocks=1 00:24:43.530 --rc geninfo_unexecuted_blocks=1 00:24:43.530 00:24:43.530 ' 00:24:43.530 19:45:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:43.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.530 --rc genhtml_branch_coverage=1 00:24:43.530 --rc genhtml_function_coverage=1 00:24:43.530 --rc genhtml_legend=1 00:24:43.530 --rc geninfo_all_blocks=1 00:24:43.530 --rc geninfo_unexecuted_blocks=1 00:24:43.530 00:24:43.530 ' 00:24:43.530 19:45:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:43.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.530 --rc genhtml_branch_coverage=1 00:24:43.530 --rc genhtml_function_coverage=1 00:24:43.530 --rc genhtml_legend=1 00:24:43.530 --rc geninfo_all_blocks=1 00:24:43.530 --rc geninfo_unexecuted_blocks=1 00:24:43.530 00:24:43.530 ' 00:24:43.530 
19:45:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:43.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.530 --rc genhtml_branch_coverage=1 00:24:43.530 --rc genhtml_function_coverage=1 00:24:43.530 --rc genhtml_legend=1 00:24:43.530 --rc geninfo_all_blocks=1 00:24:43.530 --rc geninfo_unexecuted_blocks=1 00:24:43.530 00:24:43.530 ' 00:24:43.530 19:45:30 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:43.530 19:45:30 -- nvmf/common.sh@7 -- # uname -s 00:24:43.530 19:45:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.530 19:45:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.530 19:45:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.530 19:45:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.530 19:45:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.530 19:45:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.530 19:45:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.530 19:45:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.530 19:45:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.530 19:45:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.530 19:45:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:24:43.530 19:45:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:24:43.530 19:45:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.530 19:45:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.530 19:45:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.530 19:45:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.530 19:45:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.530 19:45:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.530 19:45:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.530 19:45:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.530 19:45:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.530 19:45:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.530 19:45:30 -- paths/export.sh@5 -- # export PATH 00:24:43.530 19:45:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.530 19:45:30 -- nvmf/common.sh@46 -- # : 0 00:24:43.530 19:45:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:43.530 19:45:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:43.530 19:45:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:43.530 19:45:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.530 19:45:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.530 19:45:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:43.530 19:45:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:43.530 19:45:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:43.530 19:45:30 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.530 19:45:30 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.530 19:45:30 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.530 19:45:30 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:43.530 19:45:30 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.530 19:45:30 -- host/timeout.sh@19 -- # nvmftestinit 00:24:43.530 19:45:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:43.530 19:45:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.530 19:45:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:43.530 19:45:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:43.530 19:45:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:43.530 19:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.530 19:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.530 19:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.530 19:45:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:43.530 19:45:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:43.530 19:45:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:43.530 19:45:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:43.531 19:45:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:43.531 19:45:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:43.531 19:45:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.531 19:45:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.531 19:45:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:24:43.531 19:45:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:43.531 19:45:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:43.531 19:45:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:43.531 19:45:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:43.531 19:45:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.531 19:45:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:43.531 19:45:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:43.531 19:45:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:43.531 19:45:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:43.531 19:45:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:43.531 19:45:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:43.531 Cannot find device "nvmf_tgt_br" 00:24:43.531 19:45:30 -- nvmf/common.sh@154 -- # true 00:24:43.531 19:45:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.531 Cannot find device "nvmf_tgt_br2" 00:24:43.531 19:45:30 -- nvmf/common.sh@155 -- # true 00:24:43.531 19:45:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:43.531 19:45:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:43.531 Cannot find device "nvmf_tgt_br" 00:24:43.531 19:45:30 -- nvmf/common.sh@157 -- # true 00:24:43.531 19:45:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:43.788 Cannot find device "nvmf_tgt_br2" 00:24:43.788 19:45:30 -- nvmf/common.sh@158 -- # true 00:24:43.788 19:45:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:43.788 19:45:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:43.788 19:45:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:43.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.788 19:45:30 -- nvmf/common.sh@161 -- # true 00:24:43.788 19:45:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:43.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.788 19:45:30 -- nvmf/common.sh@162 -- # true 00:24:43.788 19:45:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:43.788 19:45:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:43.788 19:45:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:43.788 19:45:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:43.788 19:45:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:43.788 19:45:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:43.788 19:45:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:43.788 19:45:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:43.788 19:45:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:43.788 19:45:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:43.788 19:45:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:43.788 19:45:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:43.788 19:45:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:24:43.788 19:45:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:43.788 19:45:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:43.788 19:45:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:43.788 19:45:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:43.788 19:45:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:43.788 19:45:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:43.788 19:45:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:43.788 19:45:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:43.788 19:45:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:43.788 19:45:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:43.788 19:45:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:43.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:24:43.788 00:24:43.788 --- 10.0.0.2 ping statistics --- 00:24:43.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.788 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:43.788 19:45:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:43.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:43.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:43.788 00:24:43.788 --- 10.0.0.3 ping statistics --- 00:24:43.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.788 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:43.788 19:45:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:44.047 00:24:44.047 --- 10.0.0.1 ping statistics --- 00:24:44.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.047 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:44.047 19:45:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.047 19:45:30 -- nvmf/common.sh@421 -- # return 0 00:24:44.047 19:45:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.047 19:45:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.047 19:45:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:44.047 19:45:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:44.047 19:45:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.047 19:45:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:44.047 19:45:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:44.047 19:45:30 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:44.047 19:45:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:44.047 19:45:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.047 19:45:30 -- common/autotest_common.sh@10 -- # set +x 00:24:44.047 19:45:30 -- nvmf/common.sh@469 -- # nvmfpid=100230 00:24:44.047 19:45:30 -- nvmf/common.sh@470 -- # waitforlisten 100230 00:24:44.047 19:45:30 -- common/autotest_common.sh@829 -- # '[' -z 100230 ']' 00:24:44.047 19:45:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:44.047 19:45:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.047 19:45:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.047 19:45:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.047 19:45:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.047 19:45:30 -- common/autotest_common.sh@10 -- # set +x 00:24:44.047 [2024-12-15 19:45:30.765457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:44.047 [2024-12-15 19:45:30.765558] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.047 [2024-12-15 19:45:30.899287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:44.306 [2024-12-15 19:45:30.974979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.306 [2024-12-15 19:45:30.975123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.306 [2024-12-15 19:45:30.975135] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.306 [2024-12-15 19:45:30.975143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.306 [2024-12-15 19:45:30.975312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.306 [2024-12-15 19:45:30.975635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.873 19:45:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.873 19:45:31 -- common/autotest_common.sh@862 -- # return 0 00:24:44.873 19:45:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.873 19:45:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.873 19:45:31 -- common/autotest_common.sh@10 -- # set +x 00:24:45.131 19:45:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.131 19:45:31 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.131 19:45:31 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.390 [2024-12-15 19:45:32.028776] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.390 19:45:32 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:45.649 Malloc0 00:24:45.649 19:45:32 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.907 19:45:32 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.166 19:45:32 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.425 [2024-12-15 19:45:33.179679] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.425 19:45:33 -- host/timeout.sh@32 -- # bdevperf_pid=100331 00:24:46.425 19:45:33 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:46.425 19:45:33 -- host/timeout.sh@34 -- # waitforlisten 100331 /var/tmp/bdevperf.sock 00:24:46.425 19:45:33 -- common/autotest_common.sh@829 -- # '[' -z 100331 ']' 00:24:46.425 19:45:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.425 19:45:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.425 19:45:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.425 19:45:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.425 19:45:33 -- common/autotest_common.sh@10 -- # set +x 00:24:46.425 [2024-12-15 19:45:33.242950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:46.425 [2024-12-15 19:45:33.243043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100331 ] 00:24:46.684 [2024-12-15 19:45:33.367676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.684 [2024-12-15 19:45:33.442143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.621 19:45:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.621 19:45:34 -- common/autotest_common.sh@862 -- # return 0 00:24:47.621 19:45:34 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:47.880 19:45:34 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:48.139 NVMe0n1 00:24:48.139 19:45:34 -- host/timeout.sh@51 -- # rpc_pid=100377 00:24:48.139 19:45:34 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.139 19:45:34 -- host/timeout.sh@53 -- # sleep 1 00:24:48.139 Running I/O for 10 seconds... 00:24:49.075 19:45:35 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.337 [2024-12-15 19:45:36.054605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 
00:24:49.337 [2024-12-15 19:45:36.054767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.337 [2024-12-15 19:45:36.054988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.338 [2024-12-15 19:45:36.054995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6a60 is same with the state(5) to be set 00:24:49.338 [2024-12-15 19:45:36.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:115 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.338 [2024-12-15 19:45:36.055958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.055987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.055996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.056014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.056033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.056052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.056071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 
19:45:36.056089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.338 [2024-12-15 19:45:36.056107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.338 [2024-12-15 19:45:36.056125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.338 [2024-12-15 19:45:36.056135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:49.339 [2024-12-15 19:45:36.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.339 [2024-12-15 19:45:36.056857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.339 [2024-12-15 19:45:36.056885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.339 [2024-12-15 19:45:36.056894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.056912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.056930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.056949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.056995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.057003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.057202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.057239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6376 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.057293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.340 [2024-12-15 19:45:36.057329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 
19:45:36.057443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.340 [2024-12-15 19:45:36.057598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.340 [2024-12-15 19:45:36.057608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.341 [2024-12-15 19:45:36.057654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.341 [2024-12-15 19:45:36.057787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5db80 is same with the state(5) to be set 00:24:49.341 [2024-12-15 19:45:36.057808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.341 [2024-12-15 19:45:36.057826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.341 [2024-12-15 19:45:36.057836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6008 len:8 PRP1 0x0 PRP2 0x0 00:24:49.341 [2024-12-15 19:45:36.057845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.057905] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc5db80 was disconnected and freed. reset controller. 00:24:49.341 [2024-12-15 19:45:36.057987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.341 [2024-12-15 19:45:36.058003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.058013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.341 [2024-12-15 19:45:36.058021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.058029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.341 [2024-12-15 19:45:36.058038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.058046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.341 [2024-12-15 19:45:36.058054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.341 [2024-12-15 19:45:36.058063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2c250 is same with the state(5) to be set 00:24:49.341 [2024-12-15 19:45:36.058270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.341 [2024-12-15 19:45:36.058293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2c250 (9): Bad file descriptor 00:24:49.341 [2024-12-15 19:45:36.058406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.341 [2024-12-15 19:45:36.058455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.341 [2024-12-15 19:45:36.058471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2c250 with addr=10.0.0.2, port=4420 00:24:49.341 [2024-12-15 19:45:36.058481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2c250 is same with the state(5) to be set 00:24:49.341 [2024-12-15 19:45:36.058499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2c250 (9): Bad file descriptor 00:24:49.341 [2024-12-15 19:45:36.058513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.341 [2024-12-15 19:45:36.058522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.341 [2024-12-15 19:45:36.058532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.341 [2024-12-15 19:45:36.058550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
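The run of ABORTED - SQ DELETION completions above is the expected fallout of the I/O submission queue being deleted while commands were still queued: bdev_nvme frees qpair 0xc5db80, the controller is disconnected, and every reconnect attempt to 10.0.0.2:4420 dies in posix_sock_create with errno 111 (ECONNREFUSED on Linux), apparently because nothing is listening on the target side any more, so spdk_nvme_ctrlr_reconnect_poll_async keeps reporting "controller reinitialization failed". A minimal sketch of how that state could be watched from outside the test, reusing only the RPC socket and calls that already appear in this trace (the polling loop itself is illustrative and not part of host/timeout.sh):

  # Ask the bdevperf app which NVMe controllers it still tracks; while the
  # reconnect loop above is running this keeps listing NVMe0 even though
  # every TCP connect() attempt is being refused.
  while sleep 2; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  done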
00:24:49.341 [2024-12-15 19:45:36.058560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.341 19:45:36 -- host/timeout.sh@56 -- # sleep 2 00:24:51.245 [2024-12-15 19:45:38.058658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.245 [2024-12-15 19:45:38.058724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.245 [2024-12-15 19:45:38.058742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2c250 with addr=10.0.0.2, port=4420 00:24:51.245 [2024-12-15 19:45:38.058753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2c250 is same with the state(5) to be set 00:24:51.245 [2024-12-15 19:45:38.058770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2c250 (9): Bad file descriptor 00:24:51.245 [2024-12-15 19:45:38.058785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.245 [2024-12-15 19:45:38.058793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.245 [2024-12-15 19:45:38.058811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.245 [2024-12-15 19:45:38.058848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.245 [2024-12-15 19:45:38.058860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.245 19:45:38 -- host/timeout.sh@57 -- # get_controller 00:24:51.245 19:45:38 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.245 19:45:38 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:51.504 19:45:38 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:51.504 19:45:38 -- host/timeout.sh@58 -- # get_bdev 00:24:51.504 19:45:38 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:51.504 19:45:38 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:51.763 19:45:38 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:51.763 19:45:38 -- host/timeout.sh@61 -- # sleep 5 00:24:53.668 [2024-12-15 19:45:40.059065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.668 [2024-12-15 19:45:40.059149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.668 [2024-12-15 19:45:40.059168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2c250 with addr=10.0.0.2, port=4420 00:24:53.668 [2024-12-15 19:45:40.059180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2c250 is same with the state(5) to be set 00:24:53.668 [2024-12-15 19:45:40.059217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2c250 (9): Bad file descriptor 00:24:53.668 [2024-12-15 19:45:40.059243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.668 [2024-12-15 19:45:40.059254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.668 [2024-12-15 19:45:40.059265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:24:53.668 [2024-12-15 19:45:40.059310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.668 [2024-12-15 19:45:40.059331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.572 [2024-12-15 19:45:42.059351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.572 [2024-12-15 19:45:42.059387] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.572 [2024-12-15 19:45:42.059398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.572 [2024-12-15 19:45:42.059407] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:55.572 [2024-12-15 19:45:42.059425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.508 00:24:56.508 Latency(us) 00:24:56.508 [2024-12-15T19:45:43.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.508 [2024-12-15T19:45:43.404Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:56.508 Verification LBA range: start 0x0 length 0x4000 00:24:56.508 NVMe0n1 : 8.10 2107.83 8.23 15.81 0.00 60197.06 2546.97 7015926.69 00:24:56.508 [2024-12-15T19:45:43.404Z] =================================================================================================================== 00:24:56.508 [2024-12-15T19:45:43.404Z] Total : 2107.83 8.23 15.81 0.00 60197.06 2546.97 7015926.69 00:24:56.508 0 00:24:56.766 19:45:43 -- host/timeout.sh@62 -- # get_controller 00:24:56.766 19:45:43 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.766 19:45:43 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:57.334 19:45:43 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:57.334 19:45:43 -- host/timeout.sh@63 -- # get_bdev 00:24:57.334 19:45:43 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:57.334 19:45:43 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:57.593 19:45:44 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:57.593 19:45:44 -- host/timeout.sh@65 -- # wait 100377 00:24:57.593 19:45:44 -- host/timeout.sh@67 -- # killprocess 100331 00:24:57.593 19:45:44 -- common/autotest_common.sh@936 -- # '[' -z 100331 ']' 00:24:57.593 19:45:44 -- common/autotest_common.sh@940 -- # kill -0 100331 00:24:57.593 19:45:44 -- common/autotest_common.sh@941 -- # uname 00:24:57.593 19:45:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.593 19:45:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100331 00:24:57.593 killing process with pid 100331 00:24:57.593 Received shutdown signal, test time was about 9.321820 seconds 00:24:57.593 00:24:57.593 Latency(us) 00:24:57.593 [2024-12-15T19:45:44.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.593 [2024-12-15T19:45:44.489Z] =================================================================================================================== 00:24:57.593 [2024-12-15T19:45:44.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.593 19:45:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:57.593 19:45:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:57.593 
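Reading the bdevperf summary above: over 8.10 s of measured runtime NVMe0n1 averaged 2107.83 IOPS, which at the 4096-byte I/O size matches the 8.23 MiB/s column (2107.83 x 4096 / 1048576 = 8.23), alongside 15.81 failed I/Os per second; the 60197.06 us average completion latency and the roughly 7.0 s maximum are consistent with I/O sitting queued while the controller was stuck in the reconnect loop logged above, although the table itself does not attribute the tail that way. The second, all-zero Latency(us) block above belongs to the shutdown of the same bdevperf process (pid 100331), printed while killprocess tears it down.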
19:45:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100331' 00:24:57.593 19:45:44 -- common/autotest_common.sh@955 -- # kill 100331 00:24:57.593 19:45:44 -- common/autotest_common.sh@960 -- # wait 100331 00:24:57.851 19:45:44 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.110 [2024-12-15 19:45:44.799276] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.110 19:45:44 -- host/timeout.sh@74 -- # bdevperf_pid=100536 00:24:58.110 19:45:44 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:58.110 19:45:44 -- host/timeout.sh@76 -- # waitforlisten 100536 /var/tmp/bdevperf.sock 00:24:58.110 19:45:44 -- common/autotest_common.sh@829 -- # '[' -z 100536 ']' 00:24:58.110 19:45:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.110 19:45:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.110 19:45:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.110 19:45:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.110 19:45:44 -- common/autotest_common.sh@10 -- # set +x 00:24:58.110 [2024-12-15 19:45:44.855851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:58.110 [2024-12-15 19:45:44.856325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100536 ] 00:24:58.110 [2024-12-15 19:45:44.989650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.370 [2024-12-15 19:45:45.062787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.305 19:45:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.306 19:45:45 -- common/autotest_common.sh@862 -- # return 0 00:24:59.306 19:45:45 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:59.306 19:45:46 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:59.872 NVMe0n1 00:24:59.872 19:45:46 -- host/timeout.sh@84 -- # rpc_pid=100585 00:24:59.872 19:45:46 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.872 19:45:46 -- host/timeout.sh@86 -- # sleep 1 00:24:59.872 Running I/O for 10 seconds... 
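That block of trace finishes off the first bdevperf (pid 100331) and then rebuilds the fixture for the next case: the TCP listener is re-added on 10.0.0.2:4420, a fresh bdevperf (pid 100536) is brought up on /var/tmp/bdevperf.sock, bdev_nvme_set_options is called with -r -1, and NVMe0 is attached with explicit reconnect limits before perform_tests starts the 10-second verify job. Condensed into one place, using only commands that already appear in the trace (the trailing & on bdevperf is an assumption; the pid capture in the trace implies it runs in the background, and -z keeps it waiting for RPC either way):

  # restore the target listener that the previous case removed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start bdevperf on core 2 (-m 0x4), idle until RPC arrives (-z), 128-deep 4 KiB verify workload for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # configure the NVMe bdev layer, then attach the controller with the reconnect knobs under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

As I read the three attach options: --reconnect-delay-sec 1 retries the connection roughly once a second, --fast-io-fail-timeout-sec 2 lets queued I/O start failing back after about two seconds of disconnection, and --ctrlr-loss-timeout-sec 5 abandons the controller after about five seconds, which is the behaviour the nvmf_subsystem_remove_listener call in the next step goes on to exercise.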
00:25:00.806 19:45:47 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.068 [2024-12-15 19:45:47.770837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771547] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.068 [2024-12-15 19:45:47.771748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771769] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the 
state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.771992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.772000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.772008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.772015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cad90 is same with the state(5) to be set 00:25:01.069 [2024-12-15 19:45:47.772354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.069 [2024-12-15 19:45:47.772893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.069 [2024-12-15 19:45:47.772903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.772913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.772922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.772932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.772940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.772950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.772959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.772968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.772977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.772988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.772998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:01.070 [2024-12-15 19:45:47.773133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.070 [2024-12-15 19:45:47.773662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.070 [2024-12-15 19:45:47.773679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.070 [2024-12-15 19:45:47.773691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.773699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.773734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13680 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.773968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.773987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 
[2024-12-15 19:45:47.774115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.071 [2024-12-15 19:45:47.774481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.071 [2024-12-15 19:45:47.774491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-12-15 19:45:47.774499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-12-15 19:45:47.774516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-12-15 19:45:47.774533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-12-15 19:45:47.774592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-12-15 19:45:47.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.072 [2024-12-15 19:45:47.774872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148a9c0 is same with the state(5) to be set 00:25:01.072 [2024-12-15 19:45:47.774892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.072 [2024-12-15 19:45:47.774899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.072 [2024-12-15 19:45:47.774912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:8 PRP1 0x0 PRP2 0x0 00:25:01.072 [2024-12-15 19:45:47.774921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.072 [2024-12-15 19:45:47.774980] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148a9c0 was disconnected and freed. reset controller. 00:25:01.072 [2024-12-15 19:45:47.775196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.072 [2024-12-15 19:45:47.775279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:01.072 [2024-12-15 19:45:47.775387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.072 [2024-12-15 19:45:47.775430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.072 [2024-12-15 19:45:47.775446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:01.072 [2024-12-15 19:45:47.775455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:01.072 [2024-12-15 19:45:47.775472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:01.072 [2024-12-15 19:45:47.775486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.072 [2024-12-15 19:45:47.775495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.072 [2024-12-15 19:45:47.775504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.072 [2024-12-15 19:45:47.775523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.072 [2024-12-15 19:45:47.775533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.072 19:45:47 -- host/timeout.sh@90 -- # sleep 1 00:25:02.009 [2024-12-15 19:45:48.775598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-12-15 19:45:48.775664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-12-15 19:45:48.775681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:02.009 [2024-12-15 19:45:48.775698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:02.009 [2024-12-15 19:45:48.775715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:02.009 [2024-12-15 19:45:48.775730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.009 [2024-12-15 19:45:48.775738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.009 [2024-12-15 19:45:48.775747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.009 [2024-12-15 19:45:48.775764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
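[editor's note] For orientation while reading this stretch of the log: the repeated connect() failures with errno 111 (ECONNREFUSED) above occur while the target's TCP listener is down; the trace lines that follow show the test re-adding the listener with rpc.py (after which the controller reset succeeds) and later removing it again, which restarts the abort/reconnect storm. Below is a minimal bash sketch of that listener toggle, assuming only the repo path, subsystem NQN, and address/port that appear verbatim in the surrounding trace; it is an illustration of the pattern, not the actual host/timeout.sh script.

    #!/usr/bin/env bash
    # Sketch of the listener add/remove cycle seen in this log (hypothetical wrapper).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Re-add the TCP listener so the initiator's reconnect attempts stop failing
    # with errno 111 and the controller reset can complete.
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # ... bdevperf's verify workload runs while the listener is up ...

    # Remove the listener again to force the next round of connection failures
    # and queued-I/O aborts, which is what the timeout test is exercising.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

[end editor's note]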
00:25:02.009 [2024-12-15 19:45:48.775775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.009 19:45:48 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.268 [2024-12-15 19:45:49.049087] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.268 19:45:49 -- host/timeout.sh@92 -- # wait 100585 00:25:03.241 [2024-12-15 19:45:49.788555] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:09.821 00:25:09.821 Latency(us) 00:25:09.821 [2024-12-15T19:45:56.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.821 [2024-12-15T19:45:56.717Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.821 Verification LBA range: start 0x0 length 0x4000 00:25:09.821 NVMe0n1 : 10.00 11324.91 44.24 0.00 0.00 11283.90 1184.12 3019898.88 00:25:09.821 [2024-12-15T19:45:56.717Z] =================================================================================================================== 00:25:09.821 [2024-12-15T19:45:56.717Z] Total : 11324.91 44.24 0.00 0.00 11283.90 1184.12 3019898.88 00:25:09.821 0 00:25:09.821 19:45:56 -- host/timeout.sh@97 -- # rpc_pid=100702 00:25:09.821 19:45:56 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.821 19:45:56 -- host/timeout.sh@98 -- # sleep 1 00:25:10.080 Running I/O for 10 seconds... 00:25:11.014 19:45:57 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.275 [2024-12-15 19:45:57.943383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.275 [2024-12-15 19:45:57.943667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 
19:45:57.943680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same 
with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.943949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17249c0 is same with the state(5) to be set 00:25:11.276 [2024-12-15 19:45:57.944283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.276 [2024-12-15 19:45:57.944773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.276 [2024-12-15 19:45:57.944782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 
19:45:57.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.944990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.944999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.277 [2024-12-15 19:45:57.945579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.277 [2024-12-15 19:45:57.945588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.277 [2024-12-15 19:45:57.945596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:11.278 [2024-12-15 19:45:57.945606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.945959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.945986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.945993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.946034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.946051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.946068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.946104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16248 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.278 [2024-12-15 19:45:57.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.278 [2024-12-15 19:45:57.946321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.278 [2024-12-15 19:45:57.946330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:11.279 [2024-12-15 19:45:57.946338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.279 [2024-12-15 19:45:57.946373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.279 [2024-12-15 19:45:57.946391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.279 [2024-12-15 19:45:57.946436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.279 [2024-12-15 19:45:57.946539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.279 [2024-12-15 19:45:57.946590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.279 [2024-12-15 19:45:57.946720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486010 is same with the state(5) to be set 00:25:11.279 [2024-12-15 19:45:57.946740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.279 [2024-12-15 19:45:57.946747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.279 [2024-12-15 19:45:57.946754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15880 len:8 PRP1 0x0 PRP2 0x0 00:25:11.279 [2024-12-15 19:45:57.946762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.279 [2024-12-15 19:45:57.946831] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1486010 was disconnected and freed. reset controller. 00:25:11.279 [2024-12-15 19:45:57.947022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.279 [2024-12-15 19:45:57.947088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:11.279 [2024-12-15 19:45:57.947196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.279 [2024-12-15 19:45:57.947239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.279 [2024-12-15 19:45:57.947254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:11.279 [2024-12-15 19:45:57.947264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:11.279 [2024-12-15 19:45:57.947281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:11.279 [2024-12-15 19:45:57.947295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.279 [2024-12-15 19:45:57.947304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.279 [2024-12-15 19:45:57.947314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.279 [2024-12-15 19:45:57.947332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.279 [2024-12-15 19:45:57.947343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.279 19:45:57 -- host/timeout.sh@101 -- # sleep 3 00:25:12.214 [2024-12-15 19:45:58.947407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.214 [2024-12-15 19:45:58.947475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.214 [2024-12-15 19:45:58.947492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:12.214 [2024-12-15 19:45:58.947502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:12.214 [2024-12-15 19:45:58.947519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:12.214 [2024-12-15 19:45:58.947534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.214 [2024-12-15 19:45:58.947543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.214 [2024-12-15 19:45:58.947551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.214 [2024-12-15 19:45:58.947569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.214 [2024-12-15 19:45:58.947579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.148 [2024-12-15 19:45:59.947641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-12-15 19:45:59.947704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-12-15 19:45:59.947721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:13.148 [2024-12-15 19:45:59.947731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:13.148 [2024-12-15 19:45:59.947748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:13.148 [2024-12-15 19:45:59.947763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.148 [2024-12-15 19:45:59.947771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.148 [2024-12-15 19:45:59.947780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.148 [2024-12-15 19:45:59.947797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.148 [2024-12-15 19:45:59.947807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.082 [2024-12-15 19:46:00.949651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.083 [2024-12-15 19:46:00.949723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.083 [2024-12-15 19:46:00.949740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1459090 with addr=10.0.0.2, port=4420 00:25:14.083 [2024-12-15 19:46:00.949751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459090 is same with the state(5) to be set 00:25:14.083 [2024-12-15 19:46:00.949907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1459090 (9): Bad file descriptor 00:25:14.083 [2024-12-15 19:46:00.950088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.083 [2024-12-15 19:46:00.950100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.083 [2024-12-15 19:46:00.950110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.083 [2024-12-15 19:46:00.952184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.083 [2024-12-15 19:46:00.952211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.083 19:46:00 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.341 [2024-12-15 19:46:01.211146] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.341 19:46:01 -- host/timeout.sh@103 -- # wait 100702 00:25:15.275 [2024-12-15 19:46:01.969380] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:20.543 00:25:20.544 Latency(us) 00:25:20.544 [2024-12-15T19:46:07.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.544 [2024-12-15T19:46:07.440Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:20.544 Verification LBA range: start 0x0 length 0x4000 00:25:20.544 NVMe0n1 : 10.01 9609.74 37.54 6995.02 0.00 7696.90 521.31 3019898.88 00:25:20.544 [2024-12-15T19:46:07.440Z] =================================================================================================================== 00:25:20.544 [2024-12-15T19:46:07.440Z] Total : 9609.74 37.54 6995.02 0.00 7696.90 0.00 3019898.88 00:25:20.544 0 00:25:20.544 19:46:06 -- host/timeout.sh@105 -- # killprocess 100536 00:25:20.544 19:46:06 -- common/autotest_common.sh@936 -- # '[' -z 100536 ']' 00:25:20.544 19:46:06 -- common/autotest_common.sh@940 -- # kill -0 100536 00:25:20.544 19:46:06 -- common/autotest_common.sh@941 -- # uname 00:25:20.544 19:46:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.544 19:46:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100536 00:25:20.544 killing process with pid 100536 00:25:20.544 Received shutdown signal, test time was about 10.000000 seconds 00:25:20.544 00:25:20.544 Latency(us) 00:25:20.544 [2024-12-15T19:46:07.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.544 [2024-12-15T19:46:07.440Z] =================================================================================================================== 00:25:20.544 [2024-12-15T19:46:07.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.544 19:46:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:20.544 19:46:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:20.544 19:46:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100536' 00:25:20.544 19:46:06 -- common/autotest_common.sh@955 -- # kill 100536 00:25:20.544 19:46:06 -- common/autotest_common.sh@960 -- # wait 100536 00:25:20.544 19:46:07 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:20.544 19:46:07 -- host/timeout.sh@110 -- # bdevperf_pid=100827 00:25:20.544 19:46:07 -- host/timeout.sh@112 -- # waitforlisten 100827 /var/tmp/bdevperf.sock 00:25:20.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.544 19:46:07 -- common/autotest_common.sh@829 -- # '[' -z 100827 ']' 00:25:20.544 19:46:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.544 19:46:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.544 19:46:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.544 19:46:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.544 19:46:07 -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 [2024-12-15 19:46:07.149969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:20.544 [2024-12-15 19:46:07.150211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100827 ] 00:25:20.544 [2024-12-15 19:46:07.284572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.544 [2024-12-15 19:46:07.363225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.479 19:46:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.479 19:46:08 -- common/autotest_common.sh@862 -- # return 0 00:25:21.479 19:46:08 -- host/timeout.sh@116 -- # dtrace_pid=100851 00:25:21.479 19:46:08 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100827 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:21.479 19:46:08 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:21.738 19:46:08 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:21.997 NVMe0n1 00:25:21.997 19:46:08 -- host/timeout.sh@124 -- # rpc_pid=100910 00:25:21.997 19:46:08 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.997 19:46:08 -- host/timeout.sh@125 -- # sleep 1 00:25:22.256 Running I/O for 10 seconds... 00:25:23.197 19:46:09 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.197 [2024-12-15 19:46:09.980577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.197 [2024-12-15 19:46:09.980630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.197 [2024-12-15 19:46:09.980640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.197 [2024-12-15 19:46:09.980648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980941] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.980994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the 
state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.198 [2024-12-15 19:46:09.981231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 
19:46:09.981452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same 
with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727db0 is same with the state(5) to be set 00:25:23.199 [2024-12-15 19:46:09.981982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.199 [2024-12-15 19:46:09.982019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.199 [2024-12-15 19:46:09.982042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:23.200 [2024-12-15 19:46:09.982535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.200 [2024-12-15 19:46:09.982587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.200 [2024-12-15 19:46:09.982596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 
19:46:09.982718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.982982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.982991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.201 [2024-12-15 19:46:09.983150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.201 [2024-12-15 19:46:09.983160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5920 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.202 [2024-12-15 19:46:09.983500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983694] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.202 [2024-12-15 19:46:09.983741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.202 [2024-12-15 19:46:09.983751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.983983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.983991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.203 [2024-12-15 19:46:09.984261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.203 [2024-12-15 19:46:09.984269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.204 [2024-12-15 19:46:09.984519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb5d10 is same with the state(5) to be set 00:25:23.204 [2024-12-15 19:46:09.984555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:23.204 [2024-12-15 19:46:09.984563] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:23.204 [2024-12-15 19:46:09.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27408 len:8 PRP1 0x0 PRP2 0x0 00:25:23.204 [2024-12-15 19:46:09.984578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.204 [2024-12-15 19:46:09.984637] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb5d10 was disconnected and freed. reset controller. 00:25:23.204 [2024-12-15 19:46:09.984899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:23.204 [2024-12-15 19:46:09.984975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c840b0 (9): Bad file descriptor 00:25:23.204 [2024-12-15 19:46:09.985091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.204 [2024-12-15 19:46:09.985150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.204 [2024-12-15 19:46:09.985165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c840b0 with addr=10.0.0.2, port=4420 00:25:23.204 [2024-12-15 19:46:09.985175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c840b0 is same with the state(5) to be set 00:25:23.204 [2024-12-15 19:46:09.985193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c840b0 (9): Bad file descriptor 00:25:23.204 [2024-12-15 19:46:09.985209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:23.204 [2024-12-15 19:46:09.985218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:23.204 [2024-12-15 19:46:09.985227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:23.204 [2024-12-15 19:46:09.985246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
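All of the aborted completions above end in the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the driver prints as ABORTED - SQ DELETION — the queued reads were completed manually because their submission queue was deleted when the qpair disconnected, and dnr:0 marks them as retryable. Below is a small, hypothetical helper (not part of the SPDK tree) that decodes such a pair; only the statuses relevant to this run are spelled out.

```bash
#!/usr/bin/env bash
# Hypothetical helper, not part of the SPDK test scripts: decode the "(SCT/SC)"
# pair that spdk_nvme_print_completion prints, e.g. "(00/08)" in the completions above.
decode_nvme_status() {
    local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
    case "$sct" in
        0)  case "$sc" in
                0) echo "SUCCESS" ;;
                8) echo "ABORTED - SQ DELETION" ;;   # the status seen throughout this run
                *) printf 'generic command status 0x%02x\n' "$sc" ;;
            esac ;;
        1)  printf 'command-specific status 0x%02x\n' "$sc" ;;
        2)  printf 'media/data-integrity status 0x%02x\n' "$sc" ;;
        *)  printf 'SCT 0x%x, SC 0x%02x\n' "$sct" "$sc" ;;
    esac
}

decode_nvme_status 00/08    # -> ABORTED - SQ DELETION
```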
00:25:23.204 [2024-12-15 19:46:09.985257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:23.204 19:46:10 -- host/timeout.sh@128 -- # wait 100910 00:25:25.134 [2024-12-15 19:46:11.985380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.134 [2024-12-15 19:46:11.985483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.134 [2024-12-15 19:46:11.985500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c840b0 with addr=10.0.0.2, port=4420 00:25:25.134 [2024-12-15 19:46:11.985512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c840b0 is same with the state(5) to be set 00:25:25.134 [2024-12-15 19:46:11.985541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c840b0 (9): Bad file descriptor 00:25:25.134 [2024-12-15 19:46:11.985560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:25.134 [2024-12-15 19:46:11.985569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:25.134 [2024-12-15 19:46:11.985579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.134 [2024-12-15 19:46:11.985600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:25.134 [2024-12-15 19:46:11.985612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.667 [2024-12-15 19:46:13.985692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.667 [2024-12-15 19:46:13.985787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.667 [2024-12-15 19:46:13.985803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c840b0 with addr=10.0.0.2, port=4420 00:25:27.667 [2024-12-15 19:46:13.985814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c840b0 is same with the state(5) to be set 00:25:27.667 [2024-12-15 19:46:13.985843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c840b0 (9): Bad file descriptor 00:25:27.667 [2024-12-15 19:46:13.985871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.667 [2024-12-15 19:46:13.985882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:27.667 [2024-12-15 19:46:13.985890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.667 [2024-12-15 19:46:13.985907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.667 [2024-12-15 19:46:13.985918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.570 [2024-12-15 19:46:15.985946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
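The three reconnect attempts above (19:46:09.98, 19:46:11.98, 19:46:13.98) each fail with connect() errno = 111, i.e. ECONNREFUSED, because the target-side listener is already gone, and they land almost exactly two seconds apart — the same 2000 ms cadence the reconnect-delay probes report further below. A hypothetical one-off helper (not part of timeout.sh; the log file name is assumed) that pulls those attempt times out of a per-line capture of this initiator output and prints the gaps:

```bash
#!/usr/bin/env bash
# Hypothetical helper: measure the spacing of the failed reconnect attempts.
# Assumes the initiator output shown above was captured one entry per line.
log=nvmf_host.log

grep 'nvme_tcp_qpair_connect_sock: \*ERROR\*: sock connection error' "$log" |
    sed -E 's/.*\[([0-9-]+ [0-9:.]+)\].*/\1/' |
    awk '{
        split($2, t, ":")                   # HH:MM:SS.frac -> seconds since midnight
        now = t[1] * 3600 + t[2] * 60 + t[3]
        if (NR > 1) printf "gap: %.3f s\n", now - prev
        prev = now
    }'
# Expected for this excerpt: two gaps of roughly 2.000 s each.
```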
00:25:29.570 [2024-12-15 19:46:15.985987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.570 [2024-12-15 19:46:15.986012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.570 [2024-12-15 19:46:15.986020] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:29.570 [2024-12-15 19:46:15.986039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.137 00:25:30.137 Latency(us) 00:25:30.137 [2024-12-15T19:46:17.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.137 [2024-12-15T19:46:17.033Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:30.137 NVMe0n1 : 8.09 3160.52 12.35 15.83 0.00 40266.32 3276.80 7015926.69 00:25:30.137 [2024-12-15T19:46:17.033Z] =================================================================================================================== 00:25:30.137 [2024-12-15T19:46:17.033Z] Total : 3160.52 12.35 15.83 0.00 40266.32 3276.80 7015926.69 00:25:30.137 0 00:25:30.137 19:46:17 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.137 Attaching 5 probes... 00:25:30.137 1315.834661: reset bdev controller NVMe0 00:25:30.137 1315.961240: reconnect bdev controller NVMe0 00:25:30.137 3316.219993: reconnect delay bdev controller NVMe0 00:25:30.137 3316.237653: reconnect bdev controller NVMe0 00:25:30.137 5316.574168: reconnect delay bdev controller NVMe0 00:25:30.137 5316.587190: reconnect bdev controller NVMe0 00:25:30.137 7316.876584: reconnect delay bdev controller NVMe0 00:25:30.137 7316.887995: reconnect bdev controller NVMe0 00:25:30.137 19:46:17 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:30.137 19:46:17 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:30.137 19:46:17 -- host/timeout.sh@136 -- # kill 100851 00:25:30.137 19:46:17 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.137 19:46:17 -- host/timeout.sh@139 -- # killprocess 100827 00:25:30.137 19:46:17 -- common/autotest_common.sh@936 -- # '[' -z 100827 ']' 00:25:30.137 19:46:17 -- common/autotest_common.sh@940 -- # kill -0 100827 00:25:30.137 19:46:17 -- common/autotest_common.sh@941 -- # uname 00:25:30.137 19:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.137 19:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100827 00:25:30.397 killing process with pid 100827 00:25:30.397 Received shutdown signal, test time was about 8.158883 seconds 00:25:30.397 00:25:30.397 Latency(us) 00:25:30.397 [2024-12-15T19:46:17.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.397 [2024-12-15T19:46:17.293Z] =================================================================================================================== 00:25:30.397 [2024-12-15T19:46:17.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.397 19:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:30.397 19:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:30.397 19:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100827' 00:25:30.397 19:46:17 -- common/autotest_common.sh@955 -- # kill 100827 00:25:30.397 19:46:17 -- common/autotest_common.sh@960 -- # wait 100827 00:25:30.656 
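The xtrace lines above (timeout.sh@129 through @137) show this test case's pass criterion: the probe capture in trace.txt (the "Attaching 5 probes..." banner suggests a bpftrace-style tracer) must record more than two "reconnect delay bdev controller NVMe0" events, and this run records three, spaced 2000 ms apart. A rough sketch of that sequence, reconstructed from the traced commands — the variable names and surrounding structure are assumptions, not the literal timeout.sh:

```bash
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace output above; names are assumptions.
trace_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
trace_pid=100851    # pid of the tracer started earlier in the test (value from this run)

cat "$trace_txt"    # timeout.sh@129: show the captured reset/reconnect probes

# timeout.sh@132: the target stayed unreachable for ~8 s, so the bdev layer must
# have delayed at least three reconnect attempts; two or fewer means the
# reconnect-delay logic never kicked in and the test fails.
if (($(grep -c 'reconnect delay bdev controller NVMe0' "$trace_txt") <= 2)); then
    echo "too few delayed reconnects recorded" >&2
    exit 1
fi

kill "$trace_pid"     # timeout.sh@136: stop the tracer
rm -f "$trace_txt"    # timeout.sh@137: clean up the capture
```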
19:46:17 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.656 19:46:17 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:30.656 19:46:17 -- host/timeout.sh@145 -- # nvmftestfini 00:25:30.656 19:46:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:30.656 19:46:17 -- nvmf/common.sh@116 -- # sync 00:25:30.915 19:46:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:30.915 19:46:17 -- nvmf/common.sh@119 -- # set +e 00:25:30.915 19:46:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:30.915 19:46:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:30.915 rmmod nvme_tcp 00:25:30.915 rmmod nvme_fabrics 00:25:30.915 rmmod nvme_keyring 00:25:30.915 19:46:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:30.915 19:46:17 -- nvmf/common.sh@123 -- # set -e 00:25:30.915 19:46:17 -- nvmf/common.sh@124 -- # return 0 00:25:30.915 19:46:17 -- nvmf/common.sh@477 -- # '[' -n 100230 ']' 00:25:30.915 19:46:17 -- nvmf/common.sh@478 -- # killprocess 100230 00:25:30.915 19:46:17 -- common/autotest_common.sh@936 -- # '[' -z 100230 ']' 00:25:30.915 19:46:17 -- common/autotest_common.sh@940 -- # kill -0 100230 00:25:30.915 19:46:17 -- common/autotest_common.sh@941 -- # uname 00:25:30.915 19:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.915 19:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100230 00:25:30.915 killing process with pid 100230 00:25:30.915 19:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:30.915 19:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:30.915 19:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100230' 00:25:30.915 19:46:17 -- common/autotest_common.sh@955 -- # kill 100230 00:25:30.915 19:46:17 -- common/autotest_common.sh@960 -- # wait 100230 00:25:31.173 19:46:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:31.173 19:46:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:31.173 19:46:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:31.173 19:46:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.173 19:46:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:31.173 19:46:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.173 19:46:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.173 19:46:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.173 19:46:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:31.173 00:25:31.173 real 0m47.776s 00:25:31.173 user 2m20.490s 00:25:31.173 sys 0m5.397s 00:25:31.173 19:46:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.173 19:46:17 -- common/autotest_common.sh@10 -- # set +x 00:25:31.173 ************************************ 00:25:31.173 END TEST nvmf_timeout 00:25:31.173 ************************************ 00:25:31.173 19:46:17 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:31.173 19:46:17 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:31.173 19:46:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.173 19:46:17 -- common/autotest_common.sh@10 -- # set +x 00:25:31.173 19:46:18 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:31.173 00:25:31.173 real 17m39.841s 00:25:31.173 user 56m5.056s 00:25:31.173 sys 3m57.542s 00:25:31.173 19:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.173 19:46:18 -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.173 ************************************ 00:25:31.173 END TEST nvmf_tcp 00:25:31.173 ************************************ 00:25:31.173 19:46:18 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:31.173 19:46:18 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:31.173 19:46:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:31.173 19:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.173 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:25:31.173 ************************************ 00:25:31.173 START TEST spdkcli_nvmf_tcp 00:25:31.173 ************************************ 00:25:31.173 19:46:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:31.433 * Looking for test storage... 00:25:31.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:31.433 19:46:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:31.433 19:46:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:31.433 19:46:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:31.433 19:46:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:31.433 19:46:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:31.433 19:46:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:31.433 19:46:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:31.433 19:46:18 -- scripts/common.sh@335 -- # IFS=.-: 00:25:31.433 19:46:18 -- scripts/common.sh@335 -- # read -ra ver1 00:25:31.433 19:46:18 -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.433 19:46:18 -- scripts/common.sh@336 -- # read -ra ver2 00:25:31.433 19:46:18 -- scripts/common.sh@337 -- # local 'op=<' 00:25:31.433 19:46:18 -- scripts/common.sh@339 -- # ver1_l=2 00:25:31.433 19:46:18 -- scripts/common.sh@340 -- # ver2_l=1 00:25:31.433 19:46:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:31.433 19:46:18 -- scripts/common.sh@343 -- # case "$op" in 00:25:31.433 19:46:18 -- scripts/common.sh@344 -- # : 1 00:25:31.433 19:46:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:31.433 19:46:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.433 19:46:18 -- scripts/common.sh@364 -- # decimal 1 00:25:31.433 19:46:18 -- scripts/common.sh@352 -- # local d=1 00:25:31.433 19:46:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.433 19:46:18 -- scripts/common.sh@354 -- # echo 1 00:25:31.433 19:46:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:31.433 19:46:18 -- scripts/common.sh@365 -- # decimal 2 00:25:31.433 19:46:18 -- scripts/common.sh@352 -- # local d=2 00:25:31.433 19:46:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.433 19:46:18 -- scripts/common.sh@354 -- # echo 2 00:25:31.433 19:46:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:31.433 19:46:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:31.433 19:46:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:31.433 19:46:18 -- scripts/common.sh@367 -- # return 0 00:25:31.433 19:46:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.433 19:46:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:31.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.433 --rc genhtml_branch_coverage=1 00:25:31.433 --rc genhtml_function_coverage=1 00:25:31.433 --rc genhtml_legend=1 00:25:31.433 --rc geninfo_all_blocks=1 00:25:31.433 --rc geninfo_unexecuted_blocks=1 00:25:31.433 00:25:31.433 ' 00:25:31.433 19:46:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:31.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.433 --rc genhtml_branch_coverage=1 00:25:31.433 --rc genhtml_function_coverage=1 00:25:31.433 --rc genhtml_legend=1 00:25:31.433 --rc geninfo_all_blocks=1 00:25:31.433 --rc geninfo_unexecuted_blocks=1 00:25:31.433 00:25:31.433 ' 00:25:31.433 19:46:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:31.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.433 --rc genhtml_branch_coverage=1 00:25:31.433 --rc genhtml_function_coverage=1 00:25:31.433 --rc genhtml_legend=1 00:25:31.433 --rc geninfo_all_blocks=1 00:25:31.433 --rc geninfo_unexecuted_blocks=1 00:25:31.433 00:25:31.433 ' 00:25:31.433 19:46:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:31.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.433 --rc genhtml_branch_coverage=1 00:25:31.433 --rc genhtml_function_coverage=1 00:25:31.433 --rc genhtml_legend=1 00:25:31.433 --rc geninfo_all_blocks=1 00:25:31.433 --rc geninfo_unexecuted_blocks=1 00:25:31.433 00:25:31.433 ' 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:31.433 19:46:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:31.433 19:46:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:31.433 19:46:18 -- nvmf/common.sh@7 -- # uname -s 00:25:31.433 19:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.433 19:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.433 19:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.433 19:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.433 19:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.433 19:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.433 19:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
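Note: the entries around this point come from sourcing test/nvmf/common.sh, which pins the TCP loopback defaults the spdkcli run relies on. A condensed sketch of those settings, with values copied from the trace rather than from the script itself (writing them as plain assignments is an assumption about the script's form):

    # Target ports and loopback address used for the TCP transport
    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    # Host identity is generated fresh per run (see the gen-hostnqn entry just below)
    NVME_HOSTNQN=$(nvme gen-hostnqn)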
00:25:31.433 19:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.433 19:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.433 19:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.433 19:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:25:31.433 19:46:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:25:31.433 19:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.433 19:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.433 19:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:31.433 19:46:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:31.433 19:46:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.433 19:46:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.433 19:46:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.433 19:46:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.433 19:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.433 19:46:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.433 19:46:18 -- paths/export.sh@5 -- # export PATH 00:25:31.433 19:46:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.433 19:46:18 -- nvmf/common.sh@46 -- # : 0 00:25:31.433 19:46:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:31.433 19:46:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:31.433 19:46:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:31.433 19:46:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.433 19:46:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.433 19:46:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:31.433 19:46:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:31.433 19:46:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:31.433 19:46:18 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:31.433 19:46:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.433 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:25:31.433 19:46:18 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:31.433 19:46:18 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101132 00:25:31.433 19:46:18 -- spdkcli/common.sh@34 -- # waitforlisten 101132 00:25:31.433 19:46:18 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:31.433 19:46:18 -- common/autotest_common.sh@829 -- # '[' -z 101132 ']' 00:25:31.433 19:46:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.433 19:46:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.433 19:46:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.433 19:46:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.433 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:25:31.693 [2024-12-15 19:46:18.346880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:31.693 [2024-12-15 19:46:18.346987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101132 ] 00:25:31.693 [2024-12-15 19:46:18.486495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:31.693 [2024-12-15 19:46:18.572430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:31.693 [2024-12-15 19:46:18.572752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.693 [2024-12-15 19:46:18.572779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.629 19:46:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.629 19:46:19 -- common/autotest_common.sh@862 -- # return 0 00:25:32.629 19:46:19 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:32.629 19:46:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:32.629 19:46:19 -- common/autotest_common.sh@10 -- # set +x 00:25:32.629 19:46:19 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:32.629 19:46:19 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:32.629 19:46:19 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:32.629 19:46:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.629 19:46:19 -- common/autotest_common.sh@10 -- # set +x 00:25:32.629 19:46:19 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:32.629 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:32.629 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:32.629 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:32.629 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:32.629 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:32.629 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:32.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:32.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:32.629 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:32.629 ' 00:25:33.197 [2024-12-15 19:46:19.917023] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:35.731 [2024-12-15 19:46:22.184235] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.681 [2024-12-15 19:46:23.473653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:39.214 [2024-12-15 19:46:25.868278] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:41.116 [2024-12-15 19:46:27.933904] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:43.021 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:43.021 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:43.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:43.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:43.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:43.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:43.021 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:43.021 19:46:29 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:43.021 19:46:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.021 19:46:29 -- common/autotest_common.sh@10 -- # set +x 00:25:43.021 19:46:29 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:43.021 19:46:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:43.021 19:46:29 -- common/autotest_common.sh@10 -- # set +x 00:25:43.021 19:46:29 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:43.021 19:46:29 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:43.280 19:46:30 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:43.280 19:46:30 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:43.280 19:46:30 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:43.280 19:46:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.280 19:46:30 -- common/autotest_common.sh@10 -- # set +x 00:25:43.539 19:46:30 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:43.539 19:46:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:43.539 19:46:30 -- common/autotest_common.sh@10 -- # set +x 00:25:43.539 19:46:30 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:43.539 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:43.539 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:43.539 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:43.539 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:43.539 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:43.539 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:43.539 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:43.539 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:43.539 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:43.539 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:43.539 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:43.539 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:43.540 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:43.540 ' 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:48.808 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:48.808 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:48.808 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:48.808 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:48.808 19:46:35 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:48.808 19:46:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.808 19:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:49.067 19:46:35 -- spdkcli/nvmf.sh@90 -- # killprocess 101132 00:25:49.067 19:46:35 -- common/autotest_common.sh@936 -- # '[' -z 101132 ']' 00:25:49.067 19:46:35 -- common/autotest_common.sh@940 -- # kill -0 101132 00:25:49.067 19:46:35 -- common/autotest_common.sh@941 -- # uname 00:25:49.067 19:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.067 19:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101132 00:25:49.067 19:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:49.067 19:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:49.067 19:46:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101132' 00:25:49.067 killing process with pid 101132 00:25:49.067 19:46:35 -- common/autotest_common.sh@955 -- # kill 101132 00:25:49.067 [2024-12-15 19:46:35.766109] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:49.067 19:46:35 -- common/autotest_common.sh@960 -- # wait 101132 00:25:49.326 19:46:36 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:49.326 19:46:36 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:49.326 19:46:36 -- spdkcli/common.sh@13 -- # '[' -n 101132 ']' 00:25:49.326 19:46:36 -- spdkcli/common.sh@14 -- # killprocess 101132 00:25:49.326 19:46:36 -- common/autotest_common.sh@936 -- # '[' -z 101132 ']' 00:25:49.326 19:46:36 -- common/autotest_common.sh@940 -- # kill -0 101132 00:25:49.326 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101132) - No such process 00:25:49.326 Process with pid 101132 is not found 00:25:49.326 19:46:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101132 is not found' 00:25:49.326 19:46:36 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:49.326 19:46:36 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:49.326 19:46:36 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:49.326 00:25:49.326 real 0m17.969s 00:25:49.326 user 0m38.919s 00:25:49.326 sys 0m0.949s 00:25:49.326 19:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:49.326 19:46:36 -- common/autotest_common.sh@10 -- # set +x 00:25:49.326 
************************************ 00:25:49.326 END TEST spdkcli_nvmf_tcp 00:25:49.326 ************************************ 00:25:49.326 19:46:36 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:49.326 19:46:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.326 19:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.326 19:46:36 -- common/autotest_common.sh@10 -- # set +x 00:25:49.326 ************************************ 00:25:49.326 START TEST nvmf_identify_passthru 00:25:49.326 ************************************ 00:25:49.326 19:46:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:49.326 * Looking for test storage... 00:25:49.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:49.326 19:46:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:49.326 19:46:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:49.326 19:46:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:49.585 19:46:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:49.585 19:46:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:49.585 19:46:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:49.585 19:46:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:49.585 19:46:36 -- scripts/common.sh@335 -- # IFS=.-: 00:25:49.585 19:46:36 -- scripts/common.sh@335 -- # read -ra ver1 00:25:49.585 19:46:36 -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.585 19:46:36 -- scripts/common.sh@336 -- # read -ra ver2 00:25:49.585 19:46:36 -- scripts/common.sh@337 -- # local 'op=<' 00:25:49.585 19:46:36 -- scripts/common.sh@339 -- # ver1_l=2 00:25:49.585 19:46:36 -- scripts/common.sh@340 -- # ver2_l=1 00:25:49.585 19:46:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:49.585 19:46:36 -- scripts/common.sh@343 -- # case "$op" in 00:25:49.585 19:46:36 -- scripts/common.sh@344 -- # : 1 00:25:49.585 19:46:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:49.585 19:46:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.585 19:46:36 -- scripts/common.sh@364 -- # decimal 1 00:25:49.585 19:46:36 -- scripts/common.sh@352 -- # local d=1 00:25:49.585 19:46:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.585 19:46:36 -- scripts/common.sh@354 -- # echo 1 00:25:49.585 19:46:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:49.585 19:46:36 -- scripts/common.sh@365 -- # decimal 2 00:25:49.585 19:46:36 -- scripts/common.sh@352 -- # local d=2 00:25:49.585 19:46:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.585 19:46:36 -- scripts/common.sh@354 -- # echo 2 00:25:49.585 19:46:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:49.585 19:46:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:49.585 19:46:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:49.585 19:46:36 -- scripts/common.sh@367 -- # return 0 00:25:49.585 19:46:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.585 19:46:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.585 --rc genhtml_branch_coverage=1 00:25:49.585 --rc genhtml_function_coverage=1 00:25:49.585 --rc genhtml_legend=1 00:25:49.585 --rc geninfo_all_blocks=1 00:25:49.585 --rc geninfo_unexecuted_blocks=1 00:25:49.585 00:25:49.585 ' 00:25:49.585 19:46:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.585 --rc genhtml_branch_coverage=1 00:25:49.585 --rc genhtml_function_coverage=1 00:25:49.585 --rc genhtml_legend=1 00:25:49.585 --rc geninfo_all_blocks=1 00:25:49.585 --rc geninfo_unexecuted_blocks=1 00:25:49.585 00:25:49.585 ' 00:25:49.585 19:46:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.585 --rc genhtml_branch_coverage=1 00:25:49.585 --rc genhtml_function_coverage=1 00:25:49.585 --rc genhtml_legend=1 00:25:49.585 --rc geninfo_all_blocks=1 00:25:49.585 --rc geninfo_unexecuted_blocks=1 00:25:49.585 00:25:49.585 ' 00:25:49.585 19:46:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.585 --rc genhtml_branch_coverage=1 00:25:49.585 --rc genhtml_function_coverage=1 00:25:49.585 --rc genhtml_legend=1 00:25:49.585 --rc geninfo_all_blocks=1 00:25:49.585 --rc geninfo_unexecuted_blocks=1 00:25:49.585 00:25:49.585 ' 00:25:49.585 19:46:36 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:49.585 19:46:36 -- nvmf/common.sh@7 -- # uname -s 00:25:49.585 19:46:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.585 19:46:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.585 19:46:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.585 19:46:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.585 19:46:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.585 19:46:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.585 19:46:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.585 19:46:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.585 19:46:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.585 19:46:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.585 19:46:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 
00:25:49.585 19:46:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:25:49.585 19:46:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.585 19:46:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.585 19:46:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:49.585 19:46:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.585 19:46:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.585 19:46:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.585 19:46:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.586 19:46:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@5 -- # export PATH 00:25:49.586 19:46:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- nvmf/common.sh@46 -- # : 0 00:25:49.586 19:46:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:49.586 19:46:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:49.586 19:46:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:49.586 19:46:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.586 19:46:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.586 19:46:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:49.586 19:46:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:49.586 19:46:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:49.586 19:46:36 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.586 19:46:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.586 19:46:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.586 19:46:36 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.586 19:46:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- paths/export.sh@5 -- # export PATH 00:25:49.586 19:46:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.586 19:46:36 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:49.586 19:46:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:49.586 19:46:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.586 19:46:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:49.586 19:46:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:49.586 19:46:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:49.586 19:46:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.586 19:46:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:49.586 19:46:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.586 19:46:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:49.586 19:46:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:49.586 19:46:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:49.586 19:46:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:49.586 19:46:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:49.586 19:46:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:49.586 19:46:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.586 19:46:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.586 19:46:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:49.586 19:46:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:49.586 19:46:36 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:49.586 19:46:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:49.586 19:46:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:49.586 19:46:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.586 19:46:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:49.586 19:46:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:49.586 19:46:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:49.586 19:46:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:49.586 19:46:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:49.586 19:46:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:49.586 Cannot find device "nvmf_tgt_br" 00:25:49.586 19:46:36 -- nvmf/common.sh@154 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:49.586 Cannot find device "nvmf_tgt_br2" 00:25:49.586 19:46:36 -- nvmf/common.sh@155 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:49.586 19:46:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:49.586 Cannot find device "nvmf_tgt_br" 00:25:49.586 19:46:36 -- nvmf/common.sh@157 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:49.586 Cannot find device "nvmf_tgt_br2" 00:25:49.586 19:46:36 -- nvmf/common.sh@158 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:49.586 19:46:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:49.586 19:46:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:49.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.586 19:46:36 -- nvmf/common.sh@161 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:49.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.586 19:46:36 -- nvmf/common.sh@162 -- # true 00:25:49.586 19:46:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:49.586 19:46:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.586 19:46:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.586 19:46:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.586 19:46:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:49.586 19:46:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.845 19:46:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.845 19:46:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:49.845 19:46:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:49.845 19:46:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:49.845 19:46:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:49.845 19:46:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:49.845 19:46:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:49.845 19:46:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:49.845 19:46:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.845 19:46:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.845 19:46:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:49.845 19:46:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:49.845 19:46:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.845 19:46:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.845 19:46:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.845 19:46:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.845 19:46:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.845 19:46:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:49.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:25:49.845 00:25:49.845 --- 10.0.0.2 ping statistics --- 00:25:49.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.845 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:49.845 19:46:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:49.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:25:49.845 00:25:49.845 --- 10.0.0.3 ping statistics --- 00:25:49.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.845 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:49.845 19:46:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:49.845 00:25:49.845 --- 10.0.0.1 ping statistics --- 00:25:49.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.845 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:49.845 19:46:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.845 19:46:36 -- nvmf/common.sh@421 -- # return 0 00:25:49.845 19:46:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:49.845 19:46:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.845 19:46:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:49.845 19:46:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:49.845 19:46:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.845 19:46:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:49.845 19:46:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.845 19:46:36 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:49.845 19:46:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.845 19:46:36 -- common/autotest_common.sh@10 -- # set +x 00:25:49.845 19:46:36 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:49.845 19:46:36 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:49.845 19:46:36 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:49.845 19:46:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:49.845 19:46:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:49.845 19:46:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:49.845 19:46:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:49.845 19:46:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:49.845 19:46:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:49.845 19:46:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:49.845 19:46:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:49.845 19:46:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:49.845 19:46:36 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:49.845 19:46:36 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:49.845 19:46:36 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:49.845 19:46:36 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:49.845 19:46:36 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:49.845 19:46:36 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:50.104 19:46:36 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:50.104 19:46:36 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:50.104 19:46:36 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:50.104 19:46:36 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:50.363 19:46:37 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:50.363 19:46:37 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:50.363 19:46:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.363 19:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:50.363 19:46:37 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:50.363 19:46:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:50.363 19:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:50.363 19:46:37 -- target/identify_passthru.sh@31 -- # nvmfpid=101635 00:25:50.364 19:46:37 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:50.364 19:46:37 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.364 19:46:37 -- target/identify_passthru.sh@35 -- # waitforlisten 101635 00:25:50.364 19:46:37 -- common/autotest_common.sh@829 -- # '[' -z 101635 ']' 00:25:50.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.364 19:46:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.364 19:46:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.364 19:46:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.364 19:46:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.364 19:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:50.364 [2024-12-15 19:46:37.158250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:50.364 [2024-12-15 19:46:37.158344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.622 [2024-12-15 19:46:37.296468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.622 [2024-12-15 19:46:37.379064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.622 [2024-12-15 19:46:37.379225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.622 [2024-12-15 19:46:37.379242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.622 [2024-12-15 19:46:37.379253] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
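Note: before the target comes up, the identify_passthru test (trace just above) discovers the local NVMe controller and records its serial and model numbers so they can later be compared against what the passthru subsystem reports over TCP. A minimal bash sketch of that discovery step as it appears in the trace (paths as in the log; `head -n1` stands in for the array indexing the helper functions actually use, and error handling is omitted):

    rootdir=/home/vagrant/spdk_repo/spdk
    # First PCIe address reported by gen_nvme.sh (0000:00:06.0 in this run)
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # Identify the controller directly over PCIe and keep serial/model for the later comparison
    nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')

Here those values come out as 12340 and QEMU, which the test later checks against the TCP-side identify output from the exported Nvme0n1 namespace.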
00:25:50.622 [2024-12-15 19:46:37.379325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.622 [2024-12-15 19:46:37.379465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.622 [2024-12-15 19:46:37.381249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.622 [2024-12-15 19:46:37.381297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.559 19:46:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.559 19:46:38 -- common/autotest_common.sh@862 -- # return 0 00:25:51.559 19:46:38 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:51.559 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.559 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.559 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.559 19:46:38 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:51.559 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.559 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.559 [2024-12-15 19:46:38.338049] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:51.559 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.559 19:46:38 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:51.559 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.559 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.559 [2024-12-15 19:46:38.352448] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.559 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.559 19:46:38 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:51.559 19:46:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.559 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.559 19:46:38 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:51.559 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.559 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.817 Nvme0n1 00:25:51.817 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.817 19:46:38 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:51.817 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.817 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.817 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.817 19:46:38 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:51.817 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.817 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.817 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.817 19:46:38 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.817 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.817 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.817 [2024-12-15 19:46:38.495600] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.817 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:51.817 19:46:38 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:51.817 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.817 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:51.817 [2024-12-15 19:46:38.503343] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:51.817 [ 00:25:51.817 { 00:25:51.817 "allow_any_host": true, 00:25:51.817 "hosts": [], 00:25:51.817 "listen_addresses": [], 00:25:51.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:51.817 "subtype": "Discovery" 00:25:51.817 }, 00:25:51.817 { 00:25:51.817 "allow_any_host": true, 00:25:51.818 "hosts": [], 00:25:51.818 "listen_addresses": [ 00:25:51.818 { 00:25:51.818 "adrfam": "IPv4", 00:25:51.818 "traddr": "10.0.0.2", 00:25:51.818 "transport": "TCP", 00:25:51.818 "trsvcid": "4420", 00:25:51.818 "trtype": "TCP" 00:25:51.818 } 00:25:51.818 ], 00:25:51.818 "max_cntlid": 65519, 00:25:51.818 "max_namespaces": 1, 00:25:51.818 "min_cntlid": 1, 00:25:51.818 "model_number": "SPDK bdev Controller", 00:25:51.818 "namespaces": [ 00:25:51.818 { 00:25:51.818 "bdev_name": "Nvme0n1", 00:25:51.818 "name": "Nvme0n1", 00:25:51.818 "nguid": "AA48AB6E564C47289D24A27FF40F19B7", 00:25:51.818 "nsid": 1, 00:25:51.818 "uuid": "aa48ab6e-564c-4728-9d24-a27ff40f19b7" 00:25:51.818 } 00:25:51.818 ], 00:25:51.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.818 "serial_number": "SPDK00000000000001", 00:25:51.818 "subtype": "NVMe" 00:25:51.818 } 00:25:51.818 ] 00:25:51.818 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.818 19:46:38 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:51.818 19:46:38 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:51.818 19:46:38 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:52.076 19:46:38 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:52.076 19:46:38 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:52.076 19:46:38 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.076 19:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.076 19:46:38 -- common/autotest_common.sh@10 -- # set +x 00:25:52.076 19:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.076 19:46:38 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:52.076 19:46:38 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:52.076 19:46:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:52.076 19:46:38 -- nvmf/common.sh@116 -- # sync 00:25:52.334 19:46:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:52.334 19:46:39 -- nvmf/common.sh@119 -- # set +e 00:25:52.334 19:46:39 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:52.334 19:46:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:52.334 rmmod nvme_tcp 00:25:52.334 rmmod nvme_fabrics 00:25:52.334 rmmod nvme_keyring 00:25:52.334 19:46:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:52.334 19:46:39 -- nvmf/common.sh@123 -- # set -e 00:25:52.334 19:46:39 -- nvmf/common.sh@124 -- # return 0 00:25:52.334 19:46:39 -- nvmf/common.sh@477 -- # '[' -n 101635 ']' 00:25:52.334 19:46:39 -- nvmf/common.sh@478 -- # killprocess 101635 00:25:52.334 19:46:39 -- common/autotest_common.sh@936 -- # '[' -z 101635 ']' 00:25:52.334 19:46:39 -- common/autotest_common.sh@940 -- # kill -0 101635 00:25:52.334 19:46:39 -- common/autotest_common.sh@941 -- # uname 00:25:52.334 19:46:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:52.334 19:46:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101635 00:25:52.334 19:46:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:52.334 19:46:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:52.334 19:46:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101635' 00:25:52.334 killing process with pid 101635 00:25:52.334 19:46:39 -- common/autotest_common.sh@955 -- # kill 101635 00:25:52.334 [2024-12-15 19:46:39.114054] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:52.334 19:46:39 -- common/autotest_common.sh@960 -- # wait 101635 00:25:52.593 19:46:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:52.593 19:46:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:52.593 19:46:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:52.593 19:46:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.593 19:46:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:52.593 19:46:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.593 19:46:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:52.593 19:46:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.593 19:46:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:52.593 00:25:52.593 real 0m3.323s 00:25:52.593 user 0m8.388s 00:25:52.593 sys 0m0.930s 00:25:52.593 19:46:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:52.593 19:46:39 -- common/autotest_common.sh@10 -- # set +x 00:25:52.593 ************************************ 00:25:52.593 END TEST nvmf_identify_passthru 00:25:52.593 ************************************ 00:25:52.593 19:46:39 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:52.593 19:46:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:52.593 19:46:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.593 19:46:39 -- common/autotest_common.sh@10 -- # set +x 00:25:52.593 ************************************ 00:25:52.593 START TEST nvmf_dif 00:25:52.593 ************************************ 00:25:52.593 19:46:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:52.852 * Looking for test storage... 
00:25:52.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:52.852 19:46:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:52.852 19:46:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:52.852 19:46:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:52.852 19:46:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:52.852 19:46:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:52.852 19:46:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:52.852 19:46:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:52.852 19:46:39 -- scripts/common.sh@335 -- # IFS=.-: 00:25:52.852 19:46:39 -- scripts/common.sh@335 -- # read -ra ver1 00:25:52.852 19:46:39 -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.852 19:46:39 -- scripts/common.sh@336 -- # read -ra ver2 00:25:52.852 19:46:39 -- scripts/common.sh@337 -- # local 'op=<' 00:25:52.852 19:46:39 -- scripts/common.sh@339 -- # ver1_l=2 00:25:52.852 19:46:39 -- scripts/common.sh@340 -- # ver2_l=1 00:25:52.852 19:46:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:52.852 19:46:39 -- scripts/common.sh@343 -- # case "$op" in 00:25:52.852 19:46:39 -- scripts/common.sh@344 -- # : 1 00:25:52.852 19:46:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:52.852 19:46:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.852 19:46:39 -- scripts/common.sh@364 -- # decimal 1 00:25:52.852 19:46:39 -- scripts/common.sh@352 -- # local d=1 00:25:52.852 19:46:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.852 19:46:39 -- scripts/common.sh@354 -- # echo 1 00:25:52.852 19:46:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:52.852 19:46:39 -- scripts/common.sh@365 -- # decimal 2 00:25:52.852 19:46:39 -- scripts/common.sh@352 -- # local d=2 00:25:52.852 19:46:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.852 19:46:39 -- scripts/common.sh@354 -- # echo 2 00:25:52.852 19:46:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:52.852 19:46:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:52.852 19:46:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:52.852 19:46:39 -- scripts/common.sh@367 -- # return 0 00:25:52.852 19:46:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.852 19:46:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:52.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.852 --rc genhtml_branch_coverage=1 00:25:52.852 --rc genhtml_function_coverage=1 00:25:52.852 --rc genhtml_legend=1 00:25:52.852 --rc geninfo_all_blocks=1 00:25:52.852 --rc geninfo_unexecuted_blocks=1 00:25:52.852 00:25:52.852 ' 00:25:52.852 19:46:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:52.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.852 --rc genhtml_branch_coverage=1 00:25:52.852 --rc genhtml_function_coverage=1 00:25:52.852 --rc genhtml_legend=1 00:25:52.852 --rc geninfo_all_blocks=1 00:25:52.852 --rc geninfo_unexecuted_blocks=1 00:25:52.852 00:25:52.852 ' 00:25:52.852 19:46:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:52.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.852 --rc genhtml_branch_coverage=1 00:25:52.852 --rc genhtml_function_coverage=1 00:25:52.852 --rc genhtml_legend=1 00:25:52.852 --rc geninfo_all_blocks=1 00:25:52.852 --rc geninfo_unexecuted_blocks=1 00:25:52.852 00:25:52.852 ' 00:25:52.852 
19:46:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:52.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.852 --rc genhtml_branch_coverage=1 00:25:52.852 --rc genhtml_function_coverage=1 00:25:52.852 --rc genhtml_legend=1 00:25:52.852 --rc geninfo_all_blocks=1 00:25:52.852 --rc geninfo_unexecuted_blocks=1 00:25:52.852 00:25:52.852 ' 00:25:52.852 19:46:39 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:52.852 19:46:39 -- nvmf/common.sh@7 -- # uname -s 00:25:52.852 19:46:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.852 19:46:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.852 19:46:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.852 19:46:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.852 19:46:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.852 19:46:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.852 19:46:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.852 19:46:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.852 19:46:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.852 19:46:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.852 19:46:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:25:52.852 19:46:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:25:52.852 19:46:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.852 19:46:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.852 19:46:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:52.852 19:46:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.852 19:46:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.852 19:46:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.852 19:46:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.852 19:46:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.853 19:46:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.853 19:46:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.853 19:46:39 -- paths/export.sh@5 -- # export PATH 00:25:52.853 19:46:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.853 19:46:39 -- nvmf/common.sh@46 -- # : 0 00:25:52.853 19:46:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:52.853 19:46:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:52.853 19:46:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:52.853 19:46:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.853 19:46:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.853 19:46:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:52.853 19:46:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:52.853 19:46:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:52.853 19:46:39 -- target/dif.sh@15 -- # NULL_META=16 00:25:52.853 19:46:39 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:52.853 19:46:39 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:52.853 19:46:39 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:52.853 19:46:39 -- target/dif.sh@135 -- # nvmftestinit 00:25:52.853 19:46:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:52.853 19:46:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.853 19:46:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:52.853 19:46:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:52.853 19:46:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:52.853 19:46:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.853 19:46:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:52.853 19:46:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.853 19:46:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:52.853 19:46:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:52.853 19:46:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:52.853 19:46:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:52.853 19:46:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:52.853 19:46:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:52.853 19:46:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.853 19:46:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.853 19:46:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:52.853 19:46:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:52.853 19:46:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:52.853 19:46:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:52.853 19:46:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:52.853 19:46:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.853 19:46:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:52.853 19:46:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:52.853 19:46:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:52.853 19:46:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:52.853 19:46:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:52.853 19:46:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:52.853 Cannot find device "nvmf_tgt_br" 
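The "Cannot find device" and "Cannot open network namespace" messages in this stretch are expected on a clean runner: before building its test topology, nvmf_veth_init first tears down whatever a previous run may have left behind, and the trace pattern of a failing command immediately followed by "# true" suggests each teardown step deliberately ignores failures. A minimal sketch of that idempotent-cleanup idiom, using the interface and namespace names from the traces (the exact command set in nvmf/common.sh may differ):

for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br nvmf_init_if; do
    ip link delete "$link" 2>/dev/null || true    # ignore "Cannot find device"
done
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true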
00:25:52.853 19:46:39 -- nvmf/common.sh@154 -- # true 00:25:52.853 19:46:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:52.853 Cannot find device "nvmf_tgt_br2" 00:25:52.853 19:46:39 -- nvmf/common.sh@155 -- # true 00:25:52.853 19:46:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:52.853 19:46:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:52.853 Cannot find device "nvmf_tgt_br" 00:25:52.853 19:46:39 -- nvmf/common.sh@157 -- # true 00:25:52.853 19:46:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:52.853 Cannot find device "nvmf_tgt_br2" 00:25:52.853 19:46:39 -- nvmf/common.sh@158 -- # true 00:25:52.853 19:46:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:53.111 19:46:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:53.111 19:46:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.111 19:46:39 -- nvmf/common.sh@161 -- # true 00:25:53.111 19:46:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.111 19:46:39 -- nvmf/common.sh@162 -- # true 00:25:53.111 19:46:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.111 19:46:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.111 19:46:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.111 19:46:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.111 19:46:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.111 19:46:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.111 19:46:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.111 19:46:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.111 19:46:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.111 19:46:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:53.111 19:46:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:53.111 19:46:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:53.112 19:46:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:53.112 19:46:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.112 19:46:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.112 19:46:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.112 19:46:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:53.112 19:46:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:53.112 19:46:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.112 19:46:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.112 19:46:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.112 19:46:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.112 19:46:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.370 19:46:40 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:53.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:25:53.370 00:25:53.370 --- 10.0.0.2 ping statistics --- 00:25:53.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.370 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:53.370 19:46:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:53.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:53.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:53.370 00:25:53.370 --- 10.0.0.3 ping statistics --- 00:25:53.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.370 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:53.370 19:46:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:53.370 00:25:53.370 --- 10.0.0.1 ping statistics --- 00:25:53.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.370 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:53.370 19:46:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.370 19:46:40 -- nvmf/common.sh@421 -- # return 0 00:25:53.370 19:46:40 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:53.370 19:46:40 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:53.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:53.629 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.629 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.629 19:46:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.629 19:46:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:53.629 19:46:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:53.629 19:46:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.629 19:46:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:53.629 19:46:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:53.629 19:46:40 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:53.629 19:46:40 -- target/dif.sh@137 -- # nvmfappstart 00:25:53.629 19:46:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:53.629 19:46:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:53.629 19:46:40 -- common/autotest_common.sh@10 -- # set +x 00:25:53.629 19:46:40 -- nvmf/common.sh@469 -- # nvmfpid=101997 00:25:53.629 19:46:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:53.629 19:46:40 -- nvmf/common.sh@470 -- # waitforlisten 101997 00:25:53.629 19:46:40 -- common/autotest_common.sh@829 -- # '[' -z 101997 ']' 00:25:53.629 19:46:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.629 19:46:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.629 19:46:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
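Taken together, the ip commands traced above build the topology that the three pings just verified: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) in the root namespace, and all peer ends enslaved to the nvmf_br bridge; nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the same setup, with names and addresses copied from the traces and details such as the iptables rules omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# nvmf_tgt_if2 / nvmf_tgt_br2 (10.0.0.3) are created and addressed the same way
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.2            # initiator -> target, as in the check above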
00:25:53.629 19:46:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.629 19:46:40 -- common/autotest_common.sh@10 -- # set +x 00:25:53.888 [2024-12-15 19:46:40.557644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:53.888 [2024-12-15 19:46:40.557749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.888 [2024-12-15 19:46:40.701664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.147 [2024-12-15 19:46:40.805626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:54.147 [2024-12-15 19:46:40.805809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.147 [2024-12-15 19:46:40.805843] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.147 [2024-12-15 19:46:40.805857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.147 [2024-12-15 19:46:40.805893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.742 19:46:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.742 19:46:41 -- common/autotest_common.sh@862 -- # return 0 00:25:54.742 19:46:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:54.742 19:46:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:54.742 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 19:46:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.001 19:46:41 -- target/dif.sh@139 -- # create_transport 00:25:55.001 19:46:41 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:55.001 19:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 [2024-12-15 19:46:41.668008] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.001 19:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.001 19:46:41 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:55.001 19:46:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:55.001 19:46:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 ************************************ 00:25:55.001 START TEST fio_dif_1_default 00:25:55.001 ************************************ 00:25:55.001 19:46:41 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:55.001 19:46:41 -- target/dif.sh@86 -- # create_subsystems 0 00:25:55.001 19:46:41 -- target/dif.sh@28 -- # local sub 00:25:55.001 19:46:41 -- target/dif.sh@30 -- # for sub in "$@" 00:25:55.001 19:46:41 -- target/dif.sh@31 -- # create_subsystem 0 00:25:55.001 19:46:41 -- target/dif.sh@18 -- # local sub_id=0 00:25:55.001 19:46:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:55.001 19:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 bdev_null0 00:25:55.001 19:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.001 19:46:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:55.001 19:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 19:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.001 19:46:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:55.001 19:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 19:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.001 19:46:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.001 19:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.001 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 [2024-12-15 19:46:41.712154] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.001 19:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.001 19:46:41 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:55.001 19:46:41 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:55.001 19:46:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:55.001 19:46:41 -- nvmf/common.sh@520 -- # config=() 00:25:55.001 19:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:25:55.001 19:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:55.001 19:46:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.001 19:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:55.001 { 00:25:55.001 "params": { 00:25:55.001 "name": "Nvme$subsystem", 00:25:55.001 "trtype": "$TEST_TRANSPORT", 00:25:55.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.001 "adrfam": "ipv4", 00:25:55.001 "trsvcid": "$NVMF_PORT", 00:25:55.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.001 "hdgst": ${hdgst:-false}, 00:25:55.001 "ddgst": ${ddgst:-false} 00:25:55.001 }, 00:25:55.001 "method": "bdev_nvme_attach_controller" 00:25:55.001 } 00:25:55.001 EOF 00:25:55.001 )") 00:25:55.001 19:46:41 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.001 19:46:41 -- target/dif.sh@82 -- # gen_fio_conf 00:25:55.001 19:46:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:55.001 19:46:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:55.001 19:46:41 -- target/dif.sh@54 -- # local file 00:25:55.001 19:46:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:55.001 19:46:41 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.001 19:46:41 -- target/dif.sh@56 -- # cat 00:25:55.001 19:46:41 -- common/autotest_common.sh@1330 -- # shift 00:25:55.001 19:46:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:55.001 19:46:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.001 19:46:41 -- nvmf/common.sh@542 -- # cat 00:25:55.001 19:46:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:55.001 19:46:41 -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:55.001 19:46:41 -- nvmf/common.sh@544 -- # jq . 00:25:55.001 19:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:25:55.001 19:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:55.001 "params": { 00:25:55.001 "name": "Nvme0", 00:25:55.001 "trtype": "tcp", 00:25:55.001 "traddr": "10.0.0.2", 00:25:55.001 "adrfam": "ipv4", 00:25:55.001 "trsvcid": "4420", 00:25:55.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:55.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:55.001 "hdgst": false, 00:25:55.001 "ddgst": false 00:25:55.001 }, 00:25:55.001 "method": "bdev_nvme_attach_controller" 00:25:55.001 }' 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:55.001 19:46:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:55.001 19:46:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:55.001 19:46:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:55.001 19:46:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:55.001 19:46:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:55.001 19:46:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:55.260 fio-3.35 00:25:55.260 Starting 1 thread 00:25:55.520 [2024-12-15 19:46:42.346540] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
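The fio_dif_1_default workload above never touches the kernel NVMe stack: fio_bdev preloads the SPDK bdev fio plugin, the JSON passed on /dev/fd/62 (printed by the printf trace above) tells the bdev layer to attach controller Nvme0 to the target over NVMe/TCP, and the fio job file arrives on /dev/fd/61. Run by hand, the invocation is roughly the following, where bdev.json and job.fio are hypothetical files standing in for those descriptors:

# bdev.json holds the bdev_nvme_attach_controller config shown above (name Nvme0, trtype tcp, traddr 10.0.0.2)
# job.fio names the attached namespace bdev (typically Nvme0n1) as its filename and sets rw/bs/iodepth
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio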
00:25:55.520 [2024-12-15 19:46:42.346633] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:07.723 00:26:07.723 filename0: (groupid=0, jobs=1): err= 0: pid=102079: Sun Dec 15 19:46:52 2024 00:26:07.723 read: IOPS=1777, BW=7110KiB/s (7281kB/s)(69.5MiB/10009msec) 00:26:07.723 slat (nsec): min=5787, max=87282, avg=7260.49, stdev=2940.03 00:26:07.723 clat (usec): min=337, max=42437, avg=2228.52, stdev=8447.62 00:26:07.723 lat (usec): min=343, max=42454, avg=2235.78, stdev=8447.78 00:26:07.723 clat percentiles (usec): 00:26:07.724 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 363], 00:26:07.724 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 388], 00:26:07.724 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 433], 95.00th=[ 486], 00:26:07.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:26:07.724 | 99.99th=[42206] 00:26:07.724 bw ( KiB/s): min= 2624, max=25019, per=99.98%, avg=7109.25, stdev=5772.32, samples=20 00:26:07.724 iops : min= 656, max= 6254, avg=1777.25, stdev=1442.95, samples=20 00:26:07.724 lat (usec) : 500=95.26%, 750=0.17% 00:26:07.724 lat (msec) : 10=0.02%, 50=4.54% 00:26:07.724 cpu : usr=91.10%, sys=7.90%, ctx=20, majf=0, minf=8 00:26:07.724 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.724 issued rwts: total=17792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.724 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:07.724 00:26:07.724 Run status group 0 (all jobs): 00:26:07.724 READ: bw=7110KiB/s (7281kB/s), 7110KiB/s-7110KiB/s (7281kB/s-7281kB/s), io=69.5MiB (72.9MB), run=10009-10009msec 00:26:07.724 19:46:52 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:07.724 19:46:52 -- target/dif.sh@43 -- # local sub 00:26:07.724 19:46:52 -- target/dif.sh@45 -- # for sub in "$@" 00:26:07.724 19:46:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:07.724 19:46:52 -- target/dif.sh@36 -- # local sub_id=0 00:26:07.724 19:46:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 00:26:07.724 real 0m11.016s 00:26:07.724 user 0m9.758s 00:26:07.724 sys 0m1.069s 00:26:07.724 19:46:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 ************************************ 00:26:07.724 END TEST fio_dif_1_default 00:26:07.724 ************************************ 00:26:07.724 19:46:52 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:07.724 19:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:07.724 19:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 ************************************ 00:26:07.724 START TEST 
fio_dif_1_multi_subsystems 00:26:07.724 ************************************ 00:26:07.724 19:46:52 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:07.724 19:46:52 -- target/dif.sh@92 -- # local files=1 00:26:07.724 19:46:52 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:07.724 19:46:52 -- target/dif.sh@28 -- # local sub 00:26:07.724 19:46:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.724 19:46:52 -- target/dif.sh@31 -- # create_subsystem 0 00:26:07.724 19:46:52 -- target/dif.sh@18 -- # local sub_id=0 00:26:07.724 19:46:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 bdev_null0 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 [2024-12-15 19:46:52.786979] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.724 19:46:52 -- target/dif.sh@31 -- # create_subsystem 1 00:26:07.724 19:46:52 -- target/dif.sh@18 -- # local sub_id=1 00:26:07.724 19:46:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 bdev_null1 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.724 19:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.724 19:46:52 -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.724 19:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.724 19:46:52 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:07.724 19:46:52 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:07.724 19:46:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:07.724 19:46:52 -- nvmf/common.sh@520 -- # config=() 00:26:07.724 19:46:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.724 19:46:52 -- nvmf/common.sh@520 -- # local subsystem config 00:26:07.724 19:46:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.724 19:46:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.724 19:46:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.724 { 00:26:07.724 "params": { 00:26:07.724 "name": "Nvme$subsystem", 00:26:07.724 "trtype": "$TEST_TRANSPORT", 00:26:07.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.724 "adrfam": "ipv4", 00:26:07.724 "trsvcid": "$NVMF_PORT", 00:26:07.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.724 "hdgst": ${hdgst:-false}, 00:26:07.724 "ddgst": ${ddgst:-false} 00:26:07.724 }, 00:26:07.724 "method": "bdev_nvme_attach_controller" 00:26:07.724 } 00:26:07.724 EOF 00:26:07.724 )") 00:26:07.724 19:46:52 -- target/dif.sh@82 -- # gen_fio_conf 00:26:07.724 19:46:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:07.724 19:46:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.724 19:46:52 -- target/dif.sh@54 -- # local file 00:26:07.725 19:46:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:07.725 19:46:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.725 19:46:52 -- target/dif.sh@56 -- # cat 00:26:07.725 19:46:52 -- common/autotest_common.sh@1330 -- # shift 00:26:07.725 19:46:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:07.725 19:46:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.725 19:46:52 -- nvmf/common.sh@542 -- # cat 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.725 19:46:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:07.725 19:46:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.725 19:46:52 -- target/dif.sh@73 -- # cat 00:26:07.725 19:46:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.725 19:46:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.725 { 00:26:07.725 "params": { 00:26:07.725 "name": "Nvme$subsystem", 00:26:07.725 "trtype": "$TEST_TRANSPORT", 00:26:07.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.725 "adrfam": "ipv4", 00:26:07.725 "trsvcid": "$NVMF_PORT", 00:26:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.725 "hdgst": ${hdgst:-false}, 00:26:07.725 "ddgst": ${ddgst:-false} 00:26:07.725 }, 00:26:07.725 "method": "bdev_nvme_attach_controller" 00:26:07.725 } 00:26:07.725 EOF 00:26:07.725 )") 00:26:07.725 19:46:52 -- nvmf/common.sh@542 -- # cat 00:26:07.725 19:46:52 -- target/dif.sh@72 
-- # (( file++ )) 00:26:07.725 19:46:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.725 19:46:52 -- nvmf/common.sh@544 -- # jq . 00:26:07.725 19:46:52 -- nvmf/common.sh@545 -- # IFS=, 00:26:07.725 19:46:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:07.725 "params": { 00:26:07.725 "name": "Nvme0", 00:26:07.725 "trtype": "tcp", 00:26:07.725 "traddr": "10.0.0.2", 00:26:07.725 "adrfam": "ipv4", 00:26:07.725 "trsvcid": "4420", 00:26:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:07.725 "hdgst": false, 00:26:07.725 "ddgst": false 00:26:07.725 }, 00:26:07.725 "method": "bdev_nvme_attach_controller" 00:26:07.725 },{ 00:26:07.725 "params": { 00:26:07.725 "name": "Nvme1", 00:26:07.725 "trtype": "tcp", 00:26:07.725 "traddr": "10.0.0.2", 00:26:07.725 "adrfam": "ipv4", 00:26:07.725 "trsvcid": "4420", 00:26:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.725 "hdgst": false, 00:26:07.725 "ddgst": false 00:26:07.725 }, 00:26:07.725 "method": "bdev_nvme_attach_controller" 00:26:07.725 }' 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.725 19:46:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.725 19:46:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.725 19:46:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.725 19:46:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.725 19:46:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:07.725 19:46:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.725 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:07.725 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:07.725 fio-3.35 00:26:07.725 Starting 2 threads 00:26:07.725 [2024-12-15 19:46:53.580888] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
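The printf above emits only the two per-controller "params" blocks; gen_nvmf_target_json (via the jq step traced earlier) wraps them into a bdev-subsystem configuration before handing it to the fio plugin. Assuming the usual SPDK JSON config layout, the document the plugin reads for this two-subsystem test is shaped roughly like this, written here as a heredoc into a hypothetical bdev.json:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF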
00:26:07.725 [2024-12-15 19:46:53.580956] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.732 00:26:17.732 filename0: (groupid=0, jobs=1): err= 0: pid=102239: Sun Dec 15 19:47:03 2024 00:26:17.732 read: IOPS=200, BW=802KiB/s (821kB/s)(8032KiB/10012msec) 00:26:17.732 slat (usec): min=5, max=242, avg= 9.50, stdev= 7.49 00:26:17.732 clat (usec): min=359, max=41637, avg=19913.39, stdev=20210.43 00:26:17.732 lat (usec): min=365, max=41659, avg=19922.89, stdev=20210.25 00:26:17.732 clat percentiles (usec): 00:26:17.732 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 400], 00:26:17.732 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 515], 60.00th=[40633], 00:26:17.732 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:17.732 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:17.732 | 99.99th=[41681] 00:26:17.732 bw ( KiB/s): min= 640, max= 1184, per=46.18%, avg=801.60, stdev=155.55, samples=20 00:26:17.732 iops : min= 160, max= 296, avg=200.40, stdev=38.89, samples=20 00:26:17.732 lat (usec) : 500=49.65%, 750=1.54%, 1000=0.40% 00:26:17.732 lat (msec) : 2=0.20%, 50=48.21% 00:26:17.732 cpu : usr=97.46%, sys=2.13%, ctx=31, majf=0, minf=9 00:26:17.732 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.732 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.732 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:17.732 filename1: (groupid=0, jobs=1): err= 0: pid=102240: Sun Dec 15 19:47:03 2024 00:26:17.732 read: IOPS=233, BW=933KiB/s (956kB/s)(9360KiB/10028msec) 00:26:17.732 slat (nsec): min=6137, max=66046, avg=11419.10, stdev=7595.68 00:26:17.732 clat (usec): min=366, max=41694, avg=17106.67, stdev=19890.88 00:26:17.732 lat (usec): min=374, max=41733, avg=17118.09, stdev=19890.00 00:26:17.732 clat percentiles (usec): 00:26:17.732 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:26:17.732 | 30.00th=[ 433], 40.00th=[ 449], 50.00th=[ 490], 60.00th=[40633], 00:26:17.732 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:17.732 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:17.732 | 99.99th=[41681] 00:26:17.732 bw ( KiB/s): min= 448, max= 1472, per=53.85%, avg=934.40, stdev=249.95, samples=20 00:26:17.732 iops : min= 112, max= 368, avg=233.60, stdev=62.49, samples=20 00:26:17.732 lat (usec) : 500=50.81%, 750=5.98%, 1000=1.84% 00:26:17.732 lat (msec) : 2=0.17%, 50=41.20% 00:26:17.732 cpu : usr=97.43%, sys=2.09%, ctx=7, majf=0, minf=0 00:26:17.732 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.732 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.732 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:17.732 00:26:17.732 Run status group 0 (all jobs): 00:26:17.732 READ: bw=1734KiB/s (1776kB/s), 802KiB/s-933KiB/s (821kB/s-956kB/s), io=17.0MiB (17.8MB), run=10012-10028msec 00:26:17.732 19:47:03 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:17.732 19:47:03 -- target/dif.sh@43 -- # local sub 00:26:17.732 19:47:03 -- target/dif.sh@45 -- # for sub in "$@" 
00:26:17.732 19:47:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.732 19:47:03 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.732 19:47:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.732 19:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.732 19:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:03 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.732 19:47:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:17.732 19:47:03 -- target/dif.sh@36 -- # local sub_id=1 00:26:17.732 19:47:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.732 19:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:17.732 19:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 00:26:17.732 real 0m11.228s 00:26:17.732 user 0m20.352s 00:26:17.732 sys 0m0.701s 00:26:17.732 19:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.732 ************************************ 00:26:17.732 END TEST fio_dif_1_multi_subsystems 00:26:17.732 ************************************ 00:26:17.732 19:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:04 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:17.732 19:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:17.732 19:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.732 19:47:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 ************************************ 00:26:17.732 START TEST fio_dif_rand_params 00:26:17.732 ************************************ 00:26:17.732 19:47:04 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:17.732 19:47:04 -- target/dif.sh@100 -- # local NULL_DIF 00:26:17.732 19:47:04 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:17.732 19:47:04 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:17.732 19:47:04 -- target/dif.sh@103 -- # bs=128k 00:26:17.732 19:47:04 -- target/dif.sh@103 -- # numjobs=3 00:26:17.732 19:47:04 -- target/dif.sh@103 -- # iodepth=3 00:26:17.732 19:47:04 -- target/dif.sh@103 -- # runtime=5 00:26:17.732 19:47:04 -- target/dif.sh@105 -- # create_subsystems 0 00:26:17.732 19:47:04 -- target/dif.sh@28 -- # local sub 00:26:17.732 19:47:04 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.732 19:47:04 -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.732 19:47:04 -- target/dif.sh@18 -- # local sub_id=0 00:26:17.732 19:47:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:17.732 19:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 bdev_null0 00:26:17.732 19:47:04 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.732 19:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.732 19:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 19:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.732 19:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.732 19:47:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 [2024-12-15 19:47:04.073291] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.732 19:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.732 19:47:04 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:17.732 19:47:04 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:17.732 19:47:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:17.732 19:47:04 -- nvmf/common.sh@520 -- # config=() 00:26:17.732 19:47:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.732 19:47:04 -- nvmf/common.sh@520 -- # local subsystem config 00:26:17.732 19:47:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.732 19:47:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.732 19:47:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:17.732 19:47:04 -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.732 19:47:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.732 { 00:26:17.732 "params": { 00:26:17.732 "name": "Nvme$subsystem", 00:26:17.732 "trtype": "$TEST_TRANSPORT", 00:26:17.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.732 "adrfam": "ipv4", 00:26:17.732 "trsvcid": "$NVMF_PORT", 00:26:17.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.732 "hdgst": ${hdgst:-false}, 00:26:17.732 "ddgst": ${ddgst:-false} 00:26:17.732 }, 00:26:17.732 "method": "bdev_nvme_attach_controller" 00:26:17.732 } 00:26:17.732 EOF 00:26:17.732 )") 00:26:17.732 19:47:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.732 19:47:04 -- target/dif.sh@54 -- # local file 00:26:17.732 19:47:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:17.732 19:47:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.732 19:47:04 -- target/dif.sh@56 -- # cat 00:26:17.732 19:47:04 -- common/autotest_common.sh@1330 -- # shift 00:26:17.733 19:47:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:17.733 19:47:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.733 19:47:04 -- nvmf/common.sh@542 -- # cat 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.733 19:47:04 
-- common/autotest_common.sh@1334 -- # grep libasan 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.733 19:47:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.733 19:47:04 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.733 19:47:04 -- nvmf/common.sh@544 -- # jq . 00:26:17.733 19:47:04 -- nvmf/common.sh@545 -- # IFS=, 00:26:17.733 19:47:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:17.733 "params": { 00:26:17.733 "name": "Nvme0", 00:26:17.733 "trtype": "tcp", 00:26:17.733 "traddr": "10.0.0.2", 00:26:17.733 "adrfam": "ipv4", 00:26:17.733 "trsvcid": "4420", 00:26:17.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.733 "hdgst": false, 00:26:17.733 "ddgst": false 00:26:17.733 }, 00:26:17.733 "method": "bdev_nvme_attach_controller" 00:26:17.733 }' 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.733 19:47:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.733 19:47:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.733 19:47:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.733 19:47:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.733 19:47:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:17.733 19:47:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.733 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:17.733 ... 00:26:17.733 fio-3.35 00:26:17.733 Starting 3 threads 00:26:18.001 [2024-12-15 19:47:04.709393] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
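The trace above has just finished wiring up the target side for this fio pass: a null bdev is exported through subsystem nqn.2016-06.io.spdk:cnode0 on the NVMe/TCP listener at 10.0.0.2:4420, and gen_nvmf_target_json emits the matching bdev_nvme_attach_controller parameters for the initiator. For reference, a minimal standalone sketch of the same target setup, assuming a running nvmf_tgt with the TCP transport already created and assuming rpc_cmd maps to the stock scripts/rpc.py client on the default RPC socket:

# backing null bdev: size 64 (MB), 512-byte blocks, 16-byte metadata;
# the --dif-type value changes between the test cases in this log (1, 2, or omitted)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
# export it over NVMe/TCP at 10.0.0.2:4420, matching the rpc_cmd calls traced above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420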
00:26:18.001 [2024-12-15 19:47:04.709472] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:23.267 00:26:23.267 filename0: (groupid=0, jobs=1): err= 0: pid=102394: Sun Dec 15 19:47:09 2024 00:26:23.267 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(192MiB/5004msec) 00:26:23.267 slat (nsec): min=7490, max=47469, avg=11662.05, stdev=5865.44 00:26:23.267 clat (usec): min=3622, max=46438, avg=9744.00, stdev=3770.57 00:26:23.267 lat (usec): min=3629, max=46446, avg=9755.66, stdev=3770.64 00:26:23.267 clat percentiles (usec): 00:26:23.267 | 1.00th=[ 3752], 5.00th=[ 3818], 10.00th=[ 3884], 20.00th=[ 7242], 00:26:23.267 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[10159], 60.00th=[12125], 00:26:23.267 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13698], 00:26:23.267 | 99.00th=[14091], 99.50th=[14746], 99.90th=[44303], 99.95th=[46400], 00:26:23.267 | 99.99th=[46400] 00:26:23.267 bw ( KiB/s): min=29184, max=48384, per=36.35%, avg=38570.67, stdev=7744.79, samples=9 00:26:23.267 iops : min= 228, max= 378, avg=301.33, stdev=60.51, samples=9 00:26:23.267 lat (msec) : 4=14.26%, 10=35.16%, 20=50.39%, 50=0.20% 00:26:23.267 cpu : usr=93.12%, sys=5.04%, ctx=8, majf=0, minf=11 00:26:23.267 IO depths : 1=32.2%, 2=67.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:23.267 filename0: (groupid=0, jobs=1): err= 0: pid=102395: Sun Dec 15 19:47:09 2024 00:26:23.267 read: IOPS=257, BW=32.1MiB/s (33.7MB/s)(161MiB/5004msec) 00:26:23.267 slat (nsec): min=6386, max=52190, avg=12358.23, stdev=5378.30 00:26:23.267 clat (usec): min=5199, max=52621, avg=11646.19, stdev=9471.99 00:26:23.267 lat (usec): min=5209, max=52631, avg=11658.55, stdev=9471.61 00:26:23.267 clat percentiles (usec): 00:26:23.267 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7308], 00:26:23.267 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:26:23.267 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[47449], 00:26:23.267 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:26:23.267 | 99.99th=[52691] 00:26:23.267 bw ( KiB/s): min=26112, max=43776, per=32.41%, avg=34389.33, stdev=5937.88, samples=9 00:26:23.267 iops : min= 204, max= 342, avg=268.67, stdev=46.39, samples=9 00:26:23.267 lat (msec) : 10=48.10%, 20=46.31%, 50=2.49%, 100=3.11% 00:26:23.267 cpu : usr=93.32%, sys=5.20%, ctx=7, majf=0, minf=9 00:26:23.267 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:23.267 filename0: (groupid=0, jobs=1): err= 0: pid=102396: Sun Dec 15 19:47:09 2024 00:26:23.267 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(166MiB/5003msec) 00:26:23.267 slat (nsec): min=6545, max=55163, avg=12614.46, stdev=5107.47 00:26:23.267 clat (usec): min=5155, max=51052, avg=11309.21, stdev=9923.61 00:26:23.267 lat (usec): min=5165, max=51061, avg=11321.83, stdev=9923.45 00:26:23.267 clat percentiles (usec): 
00:26:23.267 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7832], 00:26:23.267 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:26:23.267 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[47973], 00:26:23.267 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:26:23.267 | 99.99th=[51119] 00:26:23.267 bw ( KiB/s): min=15360, max=42496, per=31.26%, avg=33166.22, stdev=8779.07, samples=9 00:26:23.267 iops : min= 120, max= 332, avg=259.11, stdev=68.59, samples=9 00:26:23.267 lat (msec) : 10=84.38%, 20=9.28%, 50=4.75%, 100=1.58% 00:26:23.267 cpu : usr=93.60%, sys=4.90%, ctx=6, majf=0, minf=0 00:26:23.267 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.267 issued rwts: total=1325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:23.267 00:26:23.267 Run status group 0 (all jobs): 00:26:23.267 READ: bw=104MiB/s (109MB/s), 32.1MiB/s-38.4MiB/s (33.7MB/s-40.2MB/s), io=519MiB (544MB), run=5003-5004msec 00:26:23.267 19:47:10 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:23.267 19:47:10 -- target/dif.sh@43 -- # local sub 00:26:23.267 19:47:10 -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.267 19:47:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.267 19:47:10 -- target/dif.sh@36 -- # local sub_id=0 00:26:23.267 19:47:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # bs=4k 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # numjobs=8 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # iodepth=16 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # runtime= 00:26:23.267 19:47:10 -- target/dif.sh@109 -- # files=2 00:26:23.267 19:47:10 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:23.267 19:47:10 -- target/dif.sh@28 -- # local sub 00:26:23.267 19:47:10 -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.267 19:47:10 -- target/dif.sh@31 -- # create_subsystem 0 00:26:23.267 19:47:10 -- target/dif.sh@18 -- # local sub_id=0 00:26:23.267 19:47:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 bdev_null0 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:23.267 19:47:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 [2024-12-15 19:47:10.128301] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.267 19:47:10 -- target/dif.sh@31 -- # create_subsystem 1 00:26:23.267 19:47:10 -- target/dif.sh@18 -- # local sub_id=1 00:26:23.267 19:47:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 bdev_null1 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.267 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.267 19:47:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.267 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.267 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.527 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.527 19:47:10 -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.527 19:47:10 -- target/dif.sh@31 -- # create_subsystem 2 00:26:23.527 19:47:10 -- target/dif.sh@18 -- # local sub_id=2 00:26:23.527 19:47:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:23.527 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.527 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.527 bdev_null2 00:26:23.527 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.527 19:47:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:23.527 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.527 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.527 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.527 19:47:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:23.527 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.527 19:47:10 -- 
common/autotest_common.sh@10 -- # set +x 00:26:23.527 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.527 19:47:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:23.527 19:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.527 19:47:10 -- common/autotest_common.sh@10 -- # set +x 00:26:23.527 19:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.527 19:47:10 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:23.527 19:47:10 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:23.527 19:47:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:23.527 19:47:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.527 19:47:10 -- nvmf/common.sh@520 -- # config=() 00:26:23.527 19:47:10 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.527 19:47:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:23.527 19:47:10 -- nvmf/common.sh@520 -- # local subsystem config 00:26:23.527 19:47:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.527 19:47:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:23.527 19:47:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:23.527 19:47:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.527 19:47:10 -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:23.527 { 00:26:23.527 "params": { 00:26:23.527 "name": "Nvme$subsystem", 00:26:23.527 "trtype": "$TEST_TRANSPORT", 00:26:23.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.527 "adrfam": "ipv4", 00:26:23.527 "trsvcid": "$NVMF_PORT", 00:26:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.527 "hdgst": ${hdgst:-false}, 00:26:23.527 "ddgst": ${ddgst:-false} 00:26:23.527 }, 00:26:23.527 "method": "bdev_nvme_attach_controller" 00:26:23.527 } 00:26:23.527 EOF 00:26:23.527 )") 00:26:23.527 19:47:10 -- common/autotest_common.sh@1330 -- # shift 00:26:23.527 19:47:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:23.527 19:47:10 -- target/dif.sh@54 -- # local file 00:26:23.527 19:47:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.527 19:47:10 -- target/dif.sh@56 -- # cat 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # cat 00:26:23.527 19:47:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:23.527 19:47:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.527 19:47:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:23.527 19:47:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:23.527 { 00:26:23.527 "params": { 00:26:23.527 "name": "Nvme$subsystem", 00:26:23.527 "trtype": "$TEST_TRANSPORT", 00:26:23.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.527 "adrfam": "ipv4", 00:26:23.527 "trsvcid": "$NVMF_PORT", 00:26:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.527 "hdgst": ${hdgst:-false}, 00:26:23.527 "ddgst": ${ddgst:-false} 00:26:23.527 }, 00:26:23.527 
"method": "bdev_nvme_attach_controller" 00:26:23.527 } 00:26:23.527 EOF 00:26:23.527 )") 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.527 19:47:10 -- target/dif.sh@73 -- # cat 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # cat 00:26:23.527 19:47:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:23.527 { 00:26:23.527 "params": { 00:26:23.527 "name": "Nvme$subsystem", 00:26:23.527 "trtype": "$TEST_TRANSPORT", 00:26:23.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.527 "adrfam": "ipv4", 00:26:23.527 "trsvcid": "$NVMF_PORT", 00:26:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.527 "hdgst": ${hdgst:-false}, 00:26:23.527 "ddgst": ${ddgst:-false} 00:26:23.527 }, 00:26:23.527 "method": "bdev_nvme_attach_controller" 00:26:23.527 } 00:26:23.527 EOF 00:26:23.527 )") 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file++ )) 00:26:23.527 19:47:10 -- nvmf/common.sh@542 -- # cat 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.527 19:47:10 -- target/dif.sh@73 -- # cat 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file++ )) 00:26:23.527 19:47:10 -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.527 19:47:10 -- nvmf/common.sh@544 -- # jq . 00:26:23.527 19:47:10 -- nvmf/common.sh@545 -- # IFS=, 00:26:23.527 19:47:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:23.527 "params": { 00:26:23.527 "name": "Nvme0", 00:26:23.527 "trtype": "tcp", 00:26:23.527 "traddr": "10.0.0.2", 00:26:23.527 "adrfam": "ipv4", 00:26:23.527 "trsvcid": "4420", 00:26:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.527 "hdgst": false, 00:26:23.527 "ddgst": false 00:26:23.527 }, 00:26:23.527 "method": "bdev_nvme_attach_controller" 00:26:23.527 },{ 00:26:23.527 "params": { 00:26:23.527 "name": "Nvme1", 00:26:23.527 "trtype": "tcp", 00:26:23.527 "traddr": "10.0.0.2", 00:26:23.527 "adrfam": "ipv4", 00:26:23.527 "trsvcid": "4420", 00:26:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:23.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:23.527 "hdgst": false, 00:26:23.527 "ddgst": false 00:26:23.527 }, 00:26:23.527 "method": "bdev_nvme_attach_controller" 00:26:23.528 },{ 00:26:23.528 "params": { 00:26:23.528 "name": "Nvme2", 00:26:23.528 "trtype": "tcp", 00:26:23.528 "traddr": "10.0.0.2", 00:26:23.528 "adrfam": "ipv4", 00:26:23.528 "trsvcid": "4420", 00:26:23.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:23.528 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:23.528 "hdgst": false, 00:26:23.528 "ddgst": false 00:26:23.528 }, 00:26:23.528 "method": "bdev_nvme_attach_controller" 00:26:23.528 }' 00:26:23.528 19:47:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:23.528 19:47:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:23.528 19:47:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.528 19:47:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.528 19:47:10 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:23.528 19:47:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:23.528 19:47:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:23.528 19:47:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:23.528 19:47:10 -- 
common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:23.528 19:47:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.786 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:23.786 ... 00:26:23.786 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:23.786 ... 00:26:23.786 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:23.786 ... 00:26:23.786 fio-3.35 00:26:23.786 Starting 24 threads 00:26:24.351 [2024-12-15 19:47:11.082626] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:24.352 [2024-12-15 19:47:11.082698] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:36.609 00:26:36.609 filename0: (groupid=0, jobs=1): err= 0: pid=102497: Sun Dec 15 19:47:21 2024 00:26:36.609 read: IOPS=229, BW=919KiB/s (941kB/s)(9220KiB/10030msec) 00:26:36.609 slat (usec): min=4, max=8023, avg=17.04, stdev=167.04 00:26:36.609 clat (msec): min=24, max=131, avg=69.46, stdev=16.53 00:26:36.609 lat (msec): min=24, max=131, avg=69.47, stdev=16.53 00:26:36.609 clat percentiles (msec): 00:26:36.609 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:26:36.609 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:26:36.609 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 100], 00:26:36.609 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 132], 99.95th=[ 132], 00:26:36.609 | 99.99th=[ 132] 00:26:36.609 bw ( KiB/s): min= 768, max= 1120, per=3.86%, avg=915.45, stdev=108.83, samples=20 00:26:36.609 iops : min= 192, max= 280, avg=228.85, stdev=27.19, samples=20 00:26:36.609 lat (msec) : 50=14.14%, 100=82.17%, 250=3.69% 00:26:36.609 cpu : usr=32.31%, sys=0.54%, ctx=927, majf=0, minf=9 00:26:36.609 IO depths : 1=2.3%, 2=4.9%, 4=14.1%, 8=67.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:36.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.609 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.609 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.609 filename0: (groupid=0, jobs=1): err= 0: pid=102498: Sun Dec 15 19:47:21 2024 00:26:36.609 read: IOPS=221, BW=884KiB/s (906kB/s)(8864KiB/10024msec) 00:26:36.609 slat (nsec): min=4964, max=51462, avg=12456.63, stdev=7155.32 00:26:36.609 clat (msec): min=31, max=126, avg=72.28, stdev=17.10 00:26:36.609 lat (msec): min=31, max=126, avg=72.30, stdev=17.10 00:26:36.609 clat percentiles (msec): 00:26:36.609 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 60], 00:26:36.609 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:26:36.609 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:26:36.609 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127], 00:26:36.609 | 99.99th=[ 127] 00:26:36.609 bw ( KiB/s): min= 768, max= 1072, per=3.71%, avg=879.05, stdev=90.64, samples=19 00:26:36.609 iops : min= 192, max= 268, avg=219.74, stdev=22.66, samples=19 00:26:36.609 lat (msec) : 50=9.70%, 100=84.03%, 250=6.27% 00:26:36.609 cpu : usr=35.32%, sys=0.48%, ctx=1104, majf=0, minf=9 00:26:36.609 IO depths : 1=2.2%, 2=5.1%, 4=15.2%, 8=66.7%, 
16=10.9%, 32=0.0%, >=64=0.0% 00:26:36.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.609 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102499: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.1MiB/10061msec) 00:26:36.610 slat (usec): min=4, max=8024, avg=21.51, stdev=272.33 00:26:36.610 clat (msec): min=32, max=119, avg=61.91, stdev=17.26 00:26:36.610 lat (msec): min=32, max=119, avg=61.93, stdev=17.27 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 47], 00:26:36.610 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:26:36.610 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:26:36.610 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:26:36.610 | 99.99th=[ 121] 00:26:36.610 bw ( KiB/s): min= 768, max= 1200, per=4.35%, avg=1031.05, stdev=137.73, samples=20 00:26:36.610 iops : min= 192, max= 300, avg=257.70, stdev=34.41, samples=20 00:26:36.610 lat (msec) : 50=33.58%, 100=64.19%, 250=2.24% 00:26:36.610 cpu : usr=32.65%, sys=0.43%, ctx=913, majf=0, minf=9 00:26:36.610 IO depths : 1=1.1%, 2=2.7%, 4=10.0%, 8=74.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102500: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.1MiB/10072msec) 00:26:36.610 slat (usec): min=7, max=8028, avg=27.77, stdev=326.96 00:26:36.610 clat (msec): min=2, max=131, avg=61.82, stdev=21.90 00:26:36.610 lat (msec): min=2, max=131, avg=61.85, stdev=21.90 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 46], 00:26:36.610 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:26:36.610 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 97], 00:26:36.610 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:26:36.610 | 99.99th=[ 132] 00:26:36.610 bw ( KiB/s): min= 768, max= 2020, per=4.35%, avg=1032.20, stdev=260.48, samples=20 00:26:36.610 iops : min= 192, max= 505, avg=258.05, stdev=65.12, samples=20 00:26:36.610 lat (msec) : 4=2.54%, 10=0.54%, 20=1.12%, 50=28.51%, 100=63.17% 00:26:36.610 lat (msec) : 250=4.12% 00:26:36.610 cpu : usr=36.69%, sys=0.50%, ctx=1042, majf=0, minf=9 00:26:36.610 IO depths : 1=0.6%, 2=1.2%, 4=7.6%, 8=76.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=89.2%, 8=7.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102501: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=277, BW=1110KiB/s (1136kB/s)(10.9MiB/10058msec) 00:26:36.610 slat (usec): min=4, max=8028, avg=22.05, stdev=236.54 00:26:36.610 clat (msec): min=9, max=119, avg=57.43, stdev=17.38 
00:26:36.610 lat (msec): min=9, max=119, avg=57.45, stdev=17.38 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 44], 00:26:36.610 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 61], 00:26:36.610 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 88], 00:26:36.610 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 121], 99.95th=[ 121], 00:26:36.610 | 99.99th=[ 121] 00:26:36.610 bw ( KiB/s): min= 896, max= 1376, per=4.68%, avg=1109.60, stdev=140.55, samples=20 00:26:36.610 iops : min= 224, max= 344, avg=277.40, stdev=35.14, samples=20 00:26:36.610 lat (msec) : 10=0.57%, 20=1.15%, 50=39.07%, 100=58.53%, 250=0.68% 00:26:36.610 cpu : usr=42.08%, sys=0.68%, ctx=1382, majf=0, minf=9 00:26:36.610 IO depths : 1=0.6%, 2=1.3%, 4=7.5%, 8=77.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102502: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=225, BW=902KiB/s (924kB/s)(9052KiB/10035msec) 00:26:36.610 slat (usec): min=3, max=8030, avg=24.62, stdev=238.57 00:26:36.610 clat (msec): min=35, max=140, avg=70.67, stdev=18.25 00:26:36.610 lat (msec): min=35, max=140, avg=70.69, stdev=18.25 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 59], 00:26:36.610 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 71], 00:26:36.610 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:26:36.610 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:26:36.610 | 99.99th=[ 140] 00:26:36.610 bw ( KiB/s): min= 640, max= 1072, per=3.79%, avg=898.70, stdev=105.58, samples=20 00:26:36.610 iops : min= 160, max= 268, avg=224.65, stdev=26.38, samples=20 00:26:36.610 lat (msec) : 50=8.84%, 100=82.72%, 250=8.44% 00:26:36.610 cpu : usr=44.03%, sys=0.66%, ctx=1275, majf=0, minf=9 00:26:36.610 IO depths : 1=2.7%, 2=5.6%, 4=14.2%, 8=66.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=91.4%, 8=3.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102503: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=229, BW=919KiB/s (941kB/s)(9200KiB/10015msec) 00:26:36.610 slat (usec): min=4, max=8040, avg=16.14, stdev=167.55 00:26:36.610 clat (msec): min=31, max=144, avg=69.57, stdev=19.25 00:26:36.610 lat (msec): min=31, max=144, avg=69.58, stdev=19.24 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 56], 00:26:36.610 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 71], 00:26:36.610 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 105], 00:26:36.610 | 99.00th=[ 125], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 146], 00:26:36.610 | 99.99th=[ 146] 00:26:36.610 bw ( KiB/s): min= 640, max= 1128, per=3.88%, avg=919.16, stdev=120.68, samples=19 00:26:36.610 iops : min= 160, max= 282, avg=229.79, stdev=30.17, samples=19 00:26:36.610 lat (msec) : 50=17.39%, 100=76.83%, 250=5.78% 00:26:36.610 
cpu : usr=41.92%, sys=0.67%, ctx=1226, majf=0, minf=9 00:26:36.610 IO depths : 1=2.1%, 2=4.8%, 4=14.2%, 8=67.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=91.3%, 8=4.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=102504: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=226, BW=904KiB/s (926kB/s)(9060KiB/10019msec) 00:26:36.610 slat (usec): min=4, max=6247, avg=19.33, stdev=188.16 00:26:36.610 clat (msec): min=31, max=163, avg=70.63, stdev=19.47 00:26:36.610 lat (msec): min=31, max=163, avg=70.65, stdev=19.47 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 59], 00:26:36.610 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 71], 00:26:36.610 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 105], 00:26:36.610 | 99.00th=[ 129], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 163], 00:26:36.610 | 99.99th=[ 163] 00:26:36.610 bw ( KiB/s): min= 640, max= 1200, per=3.80%, avg=900.63, stdev=116.40, samples=19 00:26:36.610 iops : min= 160, max= 300, avg=225.16, stdev=29.10, samples=19 00:26:36.610 lat (msec) : 50=12.32%, 100=81.59%, 250=6.09% 00:26:36.610 cpu : usr=43.46%, sys=0.82%, ctx=1202, majf=0, minf=9 00:26:36.610 IO depths : 1=2.2%, 2=5.3%, 4=15.7%, 8=66.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename1: (groupid=0, jobs=1): err= 0: pid=102505: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=303, BW=1215KiB/s (1244kB/s)(11.9MiB/10035msec) 00:26:36.610 slat (usec): min=7, max=4017, avg=12.58, stdev=72.84 00:26:36.610 clat (usec): min=1808, max=119266, avg=52529.66, stdev=17149.88 00:26:36.610 lat (usec): min=1828, max=119280, avg=52542.24, stdev=17148.39 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 3], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 41], 00:26:36.610 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:26:36.610 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 74], 95.00th=[ 80], 00:26:36.610 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 116], 99.95th=[ 120], 00:26:36.610 | 99.99th=[ 120] 00:26:36.610 bw ( KiB/s): min= 944, max= 2144, per=5.11%, avg=1212.80, stdev=257.57, samples=20 00:26:36.610 iops : min= 236, max= 536, avg=303.20, stdev=64.39, samples=20 00:26:36.610 lat (msec) : 2=0.36%, 4=1.74%, 10=1.05%, 20=1.05%, 50=44.16% 00:26:36.610 lat (msec) : 100=51.28%, 250=0.36% 00:26:36.610 cpu : usr=42.25%, sys=0.71%, ctx=1453, majf=0, minf=9 00:26:36.610 IO depths : 1=0.8%, 2=1.8%, 4=8.9%, 8=75.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.610 issued rwts: total=3048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.610 filename1: (groupid=0, jobs=1): err= 0: pid=102506: Sun Dec 15 19:47:21 2024 00:26:36.610 read: IOPS=273, BW=1094KiB/s 
(1120kB/s)(10.7MiB/10056msec) 00:26:36.610 slat (usec): min=5, max=8023, avg=18.19, stdev=216.13 00:26:36.610 clat (msec): min=24, max=131, avg=58.33, stdev=16.54 00:26:36.610 lat (msec): min=24, max=131, avg=58.35, stdev=16.54 00:26:36.610 clat percentiles (msec): 00:26:36.610 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 46], 00:26:36.610 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:26:36.610 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 86], 00:26:36.610 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 132], 99.95th=[ 132], 00:26:36.610 | 99.99th=[ 132] 00:26:36.610 bw ( KiB/s): min= 720, max= 1296, per=4.61%, avg=1093.70, stdev=132.97, samples=20 00:26:36.610 iops : min= 180, max= 324, avg=273.40, stdev=33.25, samples=20 00:26:36.610 lat (msec) : 50=39.64%, 100=58.07%, 250=2.29% 00:26:36.610 cpu : usr=35.90%, sys=0.55%, ctx=1016, majf=0, minf=9 00:26:36.611 IO depths : 1=0.6%, 2=1.5%, 4=7.3%, 8=77.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=89.6%, 8=6.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102507: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=247, BW=989KiB/s (1013kB/s)(9956KiB/10063msec) 00:26:36.611 slat (usec): min=4, max=8030, avg=18.17, stdev=227.22 00:26:36.611 clat (msec): min=11, max=131, avg=64.39, stdev=17.96 00:26:36.611 lat (msec): min=11, max=131, avg=64.41, stdev=17.97 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:26:36.611 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:26:36.611 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 96], 00:26:36.611 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 00:26:36.611 | 99.99th=[ 132] 00:26:36.611 bw ( KiB/s): min= 768, max= 1248, per=4.17%, avg=988.60, stdev=108.43, samples=20 00:26:36.611 iops : min= 192, max= 312, avg=247.10, stdev=27.07, samples=20 00:26:36.611 lat (msec) : 20=0.44%, 50=26.96%, 100=70.23%, 250=2.37% 00:26:36.611 cpu : usr=32.56%, sys=0.49%, ctx=913, majf=0, minf=9 00:26:36.611 IO depths : 1=0.5%, 2=1.1%, 4=6.5%, 8=77.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=89.4%, 8=6.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102508: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=260, BW=1042KiB/s (1067kB/s)(10.2MiB/10053msec) 00:26:36.611 slat (usec): min=4, max=12033, avg=37.88, stdev=417.35 00:26:36.611 clat (msec): min=25, max=121, avg=61.15, stdev=16.09 00:26:36.611 lat (msec): min=25, max=121, avg=61.18, stdev=16.09 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:26:36.611 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:26:36.611 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 91], 00:26:36.611 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 123], 99.95th=[ 123], 00:26:36.611 | 99.99th=[ 123] 00:26:36.611 bw ( KiB/s): min= 808, max= 1208, per=4.39%, avg=1040.25, stdev=110.55, 
samples=20 00:26:36.611 iops : min= 202, max= 302, avg=260.05, stdev=27.63, samples=20 00:26:36.611 lat (msec) : 50=28.30%, 100=69.67%, 250=2.02% 00:26:36.611 cpu : usr=45.41%, sys=0.80%, ctx=1199, majf=0, minf=9 00:26:36.611 IO depths : 1=1.3%, 2=3.0%, 4=10.4%, 8=73.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102509: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10020msec) 00:26:36.611 slat (usec): min=3, max=10018, avg=30.19, stdev=360.87 00:26:36.611 clat (msec): min=16, max=120, avg=61.66, stdev=19.61 00:26:36.611 lat (msec): min=16, max=120, avg=61.69, stdev=19.61 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 45], 00:26:36.611 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 63], 00:26:36.611 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 96], 00:26:36.611 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 121], 00:26:36.611 | 99.99th=[ 121] 00:26:36.611 bw ( KiB/s): min= 736, max= 1344, per=4.34%, avg=1029.60, stdev=175.49, samples=20 00:26:36.611 iops : min= 184, max= 336, avg=257.35, stdev=43.83, samples=20 00:26:36.611 lat (msec) : 20=1.24%, 50=33.62%, 100=62.25%, 250=2.89% 00:26:36.611 cpu : usr=38.65%, sys=0.54%, ctx=1191, majf=0, minf=9 00:26:36.611 IO depths : 1=1.5%, 2=3.5%, 4=11.0%, 8=72.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102510: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=255, BW=1021KiB/s (1045kB/s)(10.0MiB/10038msec) 00:26:36.611 slat (usec): min=4, max=253, avg=12.59, stdev= 8.39 00:26:36.611 clat (msec): min=26, max=128, avg=62.57, stdev=17.19 00:26:36.611 lat (msec): min=26, max=128, avg=62.58, stdev=17.19 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:26:36.611 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 66], 00:26:36.611 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 96], 00:26:36.611 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 129], 00:26:36.611 | 99.99th=[ 129] 00:26:36.611 bw ( KiB/s): min= 640, max= 1328, per=4.29%, avg=1018.00, stdev=142.70, samples=20 00:26:36.611 iops : min= 160, max= 332, avg=254.50, stdev=35.68, samples=20 00:26:36.611 lat (msec) : 50=28.27%, 100=67.71%, 250=4.02% 00:26:36.611 cpu : usr=41.22%, sys=0.54%, ctx=1186, majf=0, minf=9 00:26:36.611 IO depths : 1=1.7%, 2=3.6%, 4=10.7%, 8=71.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102511: 
Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=230, BW=923KiB/s (945kB/s)(9264KiB/10040msec) 00:26:36.611 slat (usec): min=4, max=8063, avg=22.59, stdev=250.44 00:26:36.611 clat (msec): min=35, max=170, avg=69.22, stdev=19.23 00:26:36.611 lat (msec): min=35, max=170, avg=69.25, stdev=19.24 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:26:36.611 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:36.611 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:26:36.611 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 171], 99.95th=[ 171], 00:26:36.611 | 99.99th=[ 171] 00:26:36.611 bw ( KiB/s): min= 640, max= 1080, per=3.88%, avg=919.90, stdev=110.20, samples=20 00:26:36.611 iops : min= 160, max= 270, avg=229.95, stdev=27.55, samples=20 00:26:36.611 lat (msec) : 50=16.36%, 100=78.24%, 250=5.40% 00:26:36.611 cpu : usr=32.54%, sys=0.52%, ctx=911, majf=0, minf=9 00:26:36.611 IO depths : 1=1.6%, 2=3.9%, 4=13.0%, 8=69.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=102512: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=283, BW=1135KiB/s (1162kB/s)(11.1MiB/10058msec) 00:26:36.611 slat (usec): min=4, max=8033, avg=15.28, stdev=168.00 00:26:36.611 clat (msec): min=9, max=135, avg=56.15, stdev=17.10 00:26:36.611 lat (msec): min=9, max=135, avg=56.16, stdev=17.11 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 43], 00:26:36.611 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 60], 00:26:36.611 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 85], 00:26:36.611 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 136], 99.95th=[ 136], 00:26:36.611 | 99.99th=[ 136] 00:26:36.611 bw ( KiB/s): min= 872, max= 1424, per=4.79%, avg=1135.30, stdev=168.18, samples=20 00:26:36.611 iops : min= 218, max= 356, avg=283.80, stdev=42.00, samples=20 00:26:36.611 lat (msec) : 10=0.56%, 20=1.12%, 50=44.25%, 100=52.10%, 250=1.96% 00:26:36.611 cpu : usr=41.23%, sys=0.71%, ctx=1203, majf=0, minf=9 00:26:36.611 IO depths : 1=1.1%, 2=2.3%, 4=9.5%, 8=74.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.611 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.611 filename2: (groupid=0, jobs=1): err= 0: pid=102513: Sun Dec 15 19:47:21 2024 00:26:36.611 read: IOPS=247, BW=990KiB/s (1013kB/s)(9944KiB/10048msec) 00:26:36.611 slat (usec): min=4, max=8037, avg=16.20, stdev=161.13 00:26:36.611 clat (msec): min=26, max=139, avg=64.44, stdev=18.65 00:26:36.611 lat (msec): min=26, max=139, avg=64.46, stdev=18.65 00:26:36.611 clat percentiles (msec): 00:26:36.611 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:26:36.611 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 66], 00:26:36.611 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 90], 95.00th=[ 103], 00:26:36.611 | 99.00th=[ 113], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:26:36.612 | 99.99th=[ 140] 
00:26:36.612 bw ( KiB/s): min= 696, max= 1200, per=4.18%, avg=990.40, stdev=146.40, samples=20 00:26:36.612 iops : min= 174, max= 300, avg=247.60, stdev=36.60, samples=20 00:26:36.612 lat (msec) : 50=28.04%, 100=66.53%, 250=5.43% 00:26:36.612 cpu : usr=37.00%, sys=0.58%, ctx=1053, majf=0, minf=9 00:26:36.612 IO depths : 1=1.8%, 2=3.7%, 4=11.8%, 8=71.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102514: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=252, BW=1009KiB/s (1033kB/s)(9.90MiB/10044msec) 00:26:36.612 slat (usec): min=4, max=8036, avg=26.87, stdev=327.89 00:26:36.612 clat (msec): min=33, max=130, avg=63.24, stdev=16.70 00:26:36.612 lat (msec): min=33, max=130, avg=63.26, stdev=16.71 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 48], 00:26:36.612 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 67], 00:26:36.612 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 89], 00:26:36.612 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 131], 00:26:36.612 | 99.99th=[ 131] 00:26:36.612 bw ( KiB/s): min= 816, max= 1200, per=4.24%, avg=1006.70, stdev=128.79, samples=20 00:26:36.612 iops : min= 204, max= 300, avg=251.65, stdev=32.17, samples=20 00:26:36.612 lat (msec) : 50=24.94%, 100=72.10%, 250=2.96% 00:26:36.612 cpu : usr=40.95%, sys=0.46%, ctx=1162, majf=0, minf=9 00:26:36.612 IO depths : 1=1.3%, 2=2.9%, 4=10.7%, 8=73.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102515: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=229, BW=917KiB/s (939kB/s)(9196KiB/10032msec) 00:26:36.612 slat (usec): min=4, max=6030, avg=15.27, stdev=125.73 00:26:36.612 clat (msec): min=31, max=159, avg=69.62, stdev=19.83 00:26:36.612 lat (msec): min=31, max=159, avg=69.64, stdev=19.83 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:26:36.612 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 71], 00:26:36.612 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:26:36.612 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 161], 00:26:36.612 | 99.99th=[ 161] 00:26:36.612 bw ( KiB/s): min= 640, max= 1200, per=3.85%, avg=913.10, stdev=138.07, samples=20 00:26:36.612 iops : min= 160, max= 300, avg=228.25, stdev=34.50, samples=20 00:26:36.612 lat (msec) : 50=16.62%, 100=77.03%, 250=6.35% 00:26:36.612 cpu : usr=41.91%, sys=0.52%, ctx=1165, majf=0, minf=9 00:26:36.612 IO depths : 1=2.1%, 2=4.9%, 4=14.5%, 8=67.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102516: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=255, BW=1024KiB/s (1049kB/s)(10.1MiB/10055msec) 00:26:36.612 slat (usec): min=4, max=8033, avg=15.39, stdev=158.34 00:26:36.612 clat (msec): min=15, max=159, avg=62.23, stdev=20.52 00:26:36.612 lat (msec): min=15, max=159, avg=62.24, stdev=20.53 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 46], 00:26:36.612 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:26:36.612 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 100], 00:26:36.612 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 161], 00:26:36.612 | 99.99th=[ 161] 00:26:36.612 bw ( KiB/s): min= 720, max= 1360, per=4.31%, avg=1023.00, stdev=170.20, samples=20 00:26:36.612 iops : min= 180, max= 340, avg=255.75, stdev=42.55, samples=20 00:26:36.612 lat (msec) : 20=0.62%, 50=34.42%, 100=60.18%, 250=4.78% 00:26:36.612 cpu : usr=39.38%, sys=0.59%, ctx=1250, majf=0, minf=9 00:26:36.612 IO depths : 1=1.1%, 2=2.4%, 4=9.4%, 8=74.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102517: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=230, BW=924KiB/s (946kB/s)(9276KiB/10039msec) 00:26:36.612 slat (usec): min=3, max=8029, avg=25.36, stdev=279.15 00:26:36.612 clat (msec): min=26, max=139, avg=69.09, stdev=17.74 00:26:36.612 lat (msec): min=26, max=139, avg=69.11, stdev=17.74 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:26:36.612 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:26:36.612 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 101], 00:26:36.612 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 140], 00:26:36.612 | 99.99th=[ 140] 00:26:36.612 bw ( KiB/s): min= 656, max= 1136, per=3.88%, avg=921.10, stdev=117.06, samples=20 00:26:36.612 iops : min= 164, max= 284, avg=230.25, stdev=29.27, samples=20 00:26:36.612 lat (msec) : 50=15.61%, 100=78.83%, 250=5.56% 00:26:36.612 cpu : usr=34.80%, sys=0.50%, ctx=1004, majf=0, minf=9 00:26:36.612 IO depths : 1=1.9%, 2=4.9%, 4=14.5%, 8=67.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102518: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=235, BW=942KiB/s (965kB/s)(9464KiB/10047msec) 00:26:36.612 slat (usec): min=3, max=8032, avg=22.84, stdev=247.09 00:26:36.612 clat (msec): min=34, max=131, avg=67.76, stdev=17.33 00:26:36.612 lat (msec): min=34, max=131, avg=67.78, stdev=17.32 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 53], 00:26:36.612 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:26:36.612 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 96], 00:26:36.612 | 99.00th=[ 116], 99.50th=[ 120], 
99.90th=[ 132], 99.95th=[ 132], 00:26:36.612 | 99.99th=[ 132] 00:26:36.612 bw ( KiB/s): min= 720, max= 1248, per=3.96%, avg=939.65, stdev=133.30, samples=20 00:26:36.612 iops : min= 180, max= 312, avg=234.90, stdev=33.32, samples=20 00:26:36.612 lat (msec) : 50=18.47%, 100=78.61%, 250=2.92% 00:26:36.612 cpu : usr=36.07%, sys=0.48%, ctx=1137, majf=0, minf=9 00:26:36.612 IO depths : 1=2.3%, 2=5.1%, 4=14.7%, 8=67.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102519: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=227, BW=909KiB/s (931kB/s)(9120KiB/10036msec) 00:26:36.612 slat (usec): min=4, max=8018, avg=23.28, stdev=290.34 00:26:36.612 clat (msec): min=30, max=162, avg=70.22, stdev=19.36 00:26:36.612 lat (msec): min=30, max=162, avg=70.24, stdev=19.35 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 59], 00:26:36.612 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:26:36.612 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:26:36.612 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 148], 99.95th=[ 148], 00:26:36.612 | 99.99th=[ 163] 00:26:36.612 bw ( KiB/s): min= 600, max= 1149, per=3.82%, avg=905.50, stdev=128.44, samples=20 00:26:36.612 iops : min= 150, max= 287, avg=226.35, stdev=32.08, samples=20 00:26:36.612 lat (msec) : 50=16.49%, 100=76.10%, 250=7.41% 00:26:36.612 cpu : usr=32.39%, sys=0.46%, ctx=900, majf=0, minf=9 00:26:36.612 IO depths : 1=1.3%, 2=3.4%, 4=11.7%, 8=71.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 filename2: (groupid=0, jobs=1): err= 0: pid=102520: Sun Dec 15 19:47:21 2024 00:26:36.612 read: IOPS=227, BW=912KiB/s (934kB/s)(9144KiB/10028msec) 00:26:36.612 slat (usec): min=4, max=8028, avg=28.20, stdev=345.28 00:26:36.612 clat (msec): min=28, max=167, avg=70.00, stdev=19.11 00:26:36.612 lat (msec): min=28, max=167, avg=70.02, stdev=19.11 00:26:36.612 clat percentiles (msec): 00:26:36.612 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:26:36.612 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:26:36.612 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:26:36.612 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:26:36.612 | 99.99th=[ 169] 00:26:36.612 bw ( KiB/s): min= 640, max= 1208, per=3.83%, avg=908.05, stdev=153.08, samples=20 00:26:36.612 iops : min= 160, max= 302, avg=227.00, stdev=38.27, samples=20 00:26:36.612 lat (msec) : 50=13.95%, 100=80.45%, 250=5.60% 00:26:36.612 cpu : usr=36.08%, sys=0.53%, ctx=975, majf=0, minf=9 00:26:36.612 IO depths : 1=1.5%, 2=3.6%, 4=12.3%, 8=70.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.612 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:36.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:36.612 00:26:36.612 Run status group 0 (all jobs): 00:26:36.612 READ: bw=23.2MiB/s (24.3MB/s), 884KiB/s-1215KiB/s (906kB/s-1244kB/s), io=233MiB (245MB), run=10015-10072msec 00:26:36.612 19:47:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:36.612 19:47:21 -- target/dif.sh@43 -- # local sub 00:26:36.612 19:47:21 -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.612 19:47:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:36.612 19:47:21 -- target/dif.sh@36 -- # local sub_id=0 00:26:36.612 19:47:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.613 19:47:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:36.613 19:47:21 -- target/dif.sh@36 -- # local sub_id=1 00:26:36.613 19:47:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.613 19:47:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:36.613 19:47:21 -- target/dif.sh@36 -- # local sub_id=2 00:26:36.613 19:47:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # numjobs=2 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # iodepth=8 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # runtime=5 00:26:36.613 19:47:21 -- target/dif.sh@115 -- # files=1 00:26:36.613 19:47:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:36.613 19:47:21 -- target/dif.sh@28 -- # local sub 00:26:36.613 19:47:21 -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.613 19:47:21 -- target/dif.sh@31 -- # create_subsystem 0 00:26:36.613 19:47:21 -- target/dif.sh@18 -- # local sub_id=0 00:26:36.613 19:47:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 bdev_null0 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 [2024-12-15 19:47:21.670057] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.613 19:47:21 -- target/dif.sh@31 -- # create_subsystem 1 00:26:36.613 19:47:21 -- target/dif.sh@18 -- # local sub_id=1 00:26:36.613 19:47:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 bdev_null1 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.613 19:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.613 19:47:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.613 19:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.613 19:47:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:36.613 19:47:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:36.613 19:47:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:36.613 19:47:21 -- nvmf/common.sh@520 -- # config=() 00:26:36.613 19:47:21 -- nvmf/common.sh@520 -- # local subsystem config 00:26:36.613 19:47:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:36.613 19:47:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:36.613 { 00:26:36.613 
"params": { 00:26:36.613 "name": "Nvme$subsystem", 00:26:36.613 "trtype": "$TEST_TRANSPORT", 00:26:36.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.613 "adrfam": "ipv4", 00:26:36.613 "trsvcid": "$NVMF_PORT", 00:26:36.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.613 "hdgst": ${hdgst:-false}, 00:26:36.613 "ddgst": ${ddgst:-false} 00:26:36.613 }, 00:26:36.613 "method": "bdev_nvme_attach_controller" 00:26:36.613 } 00:26:36.613 EOF 00:26:36.613 )") 00:26:36.613 19:47:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.613 19:47:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.613 19:47:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:36.613 19:47:21 -- target/dif.sh@82 -- # gen_fio_conf 00:26:36.613 19:47:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.613 19:47:21 -- nvmf/common.sh@542 -- # cat 00:26:36.613 19:47:21 -- target/dif.sh@54 -- # local file 00:26:36.613 19:47:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:36.613 19:47:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.613 19:47:21 -- target/dif.sh@56 -- # cat 00:26:36.613 19:47:21 -- common/autotest_common.sh@1330 -- # shift 00:26:36.613 19:47:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:36.613 19:47:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.613 19:47:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.613 19:47:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:36.613 19:47:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:36.613 19:47:21 -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.613 19:47:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:36.613 19:47:21 -- target/dif.sh@73 -- # cat 00:26:36.613 19:47:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:36.613 19:47:21 -- target/dif.sh@72 -- # (( file++ )) 00:26:36.613 19:47:21 -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.613 19:47:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:36.613 { 00:26:36.613 "params": { 00:26:36.613 "name": "Nvme$subsystem", 00:26:36.613 "trtype": "$TEST_TRANSPORT", 00:26:36.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.613 "adrfam": "ipv4", 00:26:36.613 "trsvcid": "$NVMF_PORT", 00:26:36.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.614 "hdgst": ${hdgst:-false}, 00:26:36.614 "ddgst": ${ddgst:-false} 00:26:36.614 }, 00:26:36.614 "method": "bdev_nvme_attach_controller" 00:26:36.614 } 00:26:36.614 EOF 00:26:36.614 )") 00:26:36.614 19:47:21 -- nvmf/common.sh@542 -- # cat 00:26:36.614 19:47:21 -- nvmf/common.sh@544 -- # jq . 
00:26:36.614 19:47:21 -- nvmf/common.sh@545 -- # IFS=, 00:26:36.614 19:47:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:36.614 "params": { 00:26:36.614 "name": "Nvme0", 00:26:36.614 "trtype": "tcp", 00:26:36.614 "traddr": "10.0.0.2", 00:26:36.614 "adrfam": "ipv4", 00:26:36.614 "trsvcid": "4420", 00:26:36.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.614 "hdgst": false, 00:26:36.614 "ddgst": false 00:26:36.614 }, 00:26:36.614 "method": "bdev_nvme_attach_controller" 00:26:36.614 },{ 00:26:36.614 "params": { 00:26:36.614 "name": "Nvme1", 00:26:36.614 "trtype": "tcp", 00:26:36.614 "traddr": "10.0.0.2", 00:26:36.614 "adrfam": "ipv4", 00:26:36.614 "trsvcid": "4420", 00:26:36.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.614 "hdgst": false, 00:26:36.614 "ddgst": false 00:26:36.614 }, 00:26:36.614 "method": "bdev_nvme_attach_controller" 00:26:36.614 }' 00:26:36.614 19:47:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:36.614 19:47:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:36.614 19:47:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.614 19:47:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.614 19:47:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:36.614 19:47:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:36.614 19:47:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:36.614 19:47:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:36.614 19:47:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:36.614 19:47:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.614 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:36.614 ... 00:26:36.614 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:36.614 ... 00:26:36.614 fio-3.35 00:26:36.614 Starting 4 threads 00:26:36.614 [2024-12-15 19:47:22.446984] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
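[editor's note] For readers reproducing this step outside the harness: the JSON fragments printed just above are what gen_nvmf_target_json feeds through jq into an SPDK bdev subsystem config, and fio consumes that config through the spdk_bdev ioengine plugin via the LD_PRELOAD invocation shown in the trace. The following is a minimal standalone sketch of the same flow; the temp-file paths, the bdev name Nvme0n1, and the job values are illustrative assumptions, not values taken from this run.

# Sketch: attach one NVMe-oF/TCP controller as an SPDK bdev and run fio against it.
# SPDK_DIR/FIO paths are assumptions; adjust to your tree.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
FIO=/usr/src/fio/fio

# SPDK JSON config of (roughly) the shape the harness generates on /dev/fd/62.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# fio job file of the kind gen_fio_conf writes to /dev/fd/61 (values illustrative).
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

# Run fio with the SPDK bdev plugin preloaded, mirroring the traced invocation.
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev $FIO --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio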
00:26:36.614 [2024-12-15 19:47:22.447046] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:40.793 00:26:40.793 filename0: (groupid=0, jobs=1): err= 0: pid=102656: Sun Dec 15 19:47:27 2024 00:26:40.793 read: IOPS=2139, BW=16.7MiB/s (17.5MB/s)(83.6MiB/5001msec) 00:26:40.793 slat (nsec): min=6729, max=58812, avg=8302.71, stdev=3241.17 00:26:40.793 clat (usec): min=620, max=4469, avg=3699.02, stdev=328.29 00:26:40.793 lat (usec): min=627, max=4477, avg=3707.33, stdev=328.12 00:26:40.793 clat percentiles (usec): 00:26:40.793 | 1.00th=[ 1876], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:26:40.793 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3720], 60.00th=[ 3752], 00:26:40.793 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:40.793 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 4293], 99.95th=[ 4359], 00:26:40.793 | 99.99th=[ 4424] 00:26:40.793 bw ( KiB/s): min=16768, max=18176, per=25.36%, avg=17191.11, stdev=531.09, samples=9 00:26:40.793 iops : min= 2096, max= 2272, avg=2148.89, stdev=66.39, samples=9 00:26:40.793 lat (usec) : 750=0.08%, 1000=0.01% 00:26:40.793 lat (msec) : 2=1.29%, 4=95.51%, 10=3.10% 00:26:40.793 cpu : usr=93.74%, sys=5.06%, ctx=7, majf=0, minf=0 00:26:40.793 IO depths : 1=8.6%, 2=22.2%, 4=52.5%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.793 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.793 issued rwts: total=10702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.793 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.793 filename0: (groupid=0, jobs=1): err= 0: pid=102657: Sun Dec 15 19:47:27 2024 00:26:40.793 read: IOPS=2109, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5001msec) 00:26:40.793 slat (nsec): min=7026, max=85612, avg=14240.67, stdev=4808.76 00:26:40.793 clat (usec): min=1936, max=5664, avg=3723.45, stdev=158.78 00:26:40.793 lat (usec): min=1949, max=5690, avg=3737.69, stdev=158.62 00:26:40.793 clat percentiles (usec): 00:26:40.793 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3621], 00:26:40.793 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:26:40.793 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:40.793 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4752], 99.95th=[ 5604], 00:26:40.793 | 99.99th=[ 5669] 00:26:40.793 bw ( KiB/s): min=16640, max=17280, per=24.95%, avg=16910.22, stdev=196.68, samples=9 00:26:40.793 iops : min= 2080, max= 2160, avg=2113.78, stdev=24.59, samples=9 00:26:40.793 lat (msec) : 2=0.05%, 4=97.41%, 10=2.54% 00:26:40.793 cpu : usr=94.64%, sys=4.12%, ctx=3, majf=0, minf=9 00:26:40.793 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.793 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.793 issued rwts: total=10552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.794 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.794 filename1: (groupid=0, jobs=1): err= 0: pid=102658: Sun Dec 15 19:47:27 2024 00:26:40.794 read: IOPS=2111, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5002msec) 00:26:40.794 slat (nsec): min=6673, max=52454, avg=10362.17, stdev=4711.72 00:26:40.794 clat (usec): min=1988, max=5028, avg=3749.93, stdev=170.55 00:26:40.794 lat (usec): min=1995, max=5045, avg=3760.30, stdev=170.06 00:26:40.794 clat 
percentiles (usec): 00:26:40.794 | 1.00th=[ 3228], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:26:40.794 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:26:40.794 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 3916], 95.00th=[ 3982], 00:26:40.794 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4555], 99.95th=[ 4621], 00:26:40.794 | 99.99th=[ 5014] 00:26:40.794 bw ( KiB/s): min=16688, max=17280, per=24.98%, avg=16929.78, stdev=213.68, samples=9 00:26:40.794 iops : min= 2086, max= 2160, avg=2116.22, stdev=26.71, samples=9 00:26:40.794 lat (msec) : 2=0.03%, 4=95.99%, 10=3.99% 00:26:40.794 cpu : usr=94.52%, sys=4.30%, ctx=9, majf=0, minf=9 00:26:40.794 IO depths : 1=3.6%, 2=7.4%, 4=67.6%, 8=21.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.794 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.794 issued rwts: total=10563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.794 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.794 filename1: (groupid=0, jobs=1): err= 0: pid=102659: Sun Dec 15 19:47:27 2024 00:26:40.794 read: IOPS=2111, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5001msec) 00:26:40.794 slat (nsec): min=6325, max=60405, avg=14304.04, stdev=4917.37 00:26:40.794 clat (usec): min=1175, max=5874, avg=3717.16, stdev=173.95 00:26:40.794 lat (usec): min=1182, max=5881, avg=3731.46, stdev=174.24 00:26:40.794 clat percentiles (usec): 00:26:40.794 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3621], 00:26:40.794 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3720], 00:26:40.794 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:40.794 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 5473], 99.95th=[ 5604], 00:26:40.794 | 99.99th=[ 5866] 00:26:40.794 bw ( KiB/s): min=16640, max=17280, per=24.96%, avg=16914.00, stdev=202.16, samples=9 00:26:40.794 iops : min= 2080, max= 2160, avg=2114.22, stdev=25.23, samples=9 00:26:40.794 lat (msec) : 2=0.09%, 4=97.62%, 10=2.28% 00:26:40.794 cpu : usr=94.26%, sys=4.60%, ctx=4, majf=0, minf=9 00:26:40.794 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.794 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.794 issued rwts: total=10560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.794 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.794 00:26:40.794 Run status group 0 (all jobs): 00:26:40.794 READ: bw=66.2MiB/s (69.4MB/s), 16.5MiB/s-16.7MiB/s (17.3MB/s-17.5MB/s), io=331MiB (347MB), run=5001-5002msec 00:26:41.052 19:47:27 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:41.052 19:47:27 -- target/dif.sh@43 -- # local sub 00:26:41.052 19:47:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.052 19:47:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:41.052 19:47:27 -- target/dif.sh@36 -- # local sub_id=0 00:26:41.052 19:47:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:41.052 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.052 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.052 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.052 19:47:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:41.052 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.052 19:47:27 
-- common/autotest_common.sh@10 -- # set +x 00:26:41.052 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.052 19:47:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.052 19:47:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:41.052 19:47:27 -- target/dif.sh@36 -- # local sub_id=1 00:26:41.052 19:47:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.052 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.052 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.052 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.052 19:47:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:41.052 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.052 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.052 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.052 ************************************ 00:26:41.052 END TEST fio_dif_rand_params 00:26:41.052 ************************************ 00:26:41.052 00:26:41.052 real 0m23.834s 00:26:41.052 user 2m7.552s 00:26:41.052 sys 0m3.960s 00:26:41.052 19:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.052 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.052 19:47:27 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:41.052 19:47:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:41.052 19:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:41.052 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.052 ************************************ 00:26:41.052 START TEST fio_dif_digest 00:26:41.052 ************************************ 00:26:41.052 19:47:27 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:41.052 19:47:27 -- target/dif.sh@123 -- # local NULL_DIF 00:26:41.052 19:47:27 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:41.052 19:47:27 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:41.052 19:47:27 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:41.052 19:47:27 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:41.052 19:47:27 -- target/dif.sh@127 -- # numjobs=3 00:26:41.052 19:47:27 -- target/dif.sh@127 -- # iodepth=3 00:26:41.052 19:47:27 -- target/dif.sh@127 -- # runtime=10 00:26:41.052 19:47:27 -- target/dif.sh@128 -- # hdgst=true 00:26:41.052 19:47:27 -- target/dif.sh@128 -- # ddgst=true 00:26:41.052 19:47:27 -- target/dif.sh@130 -- # create_subsystems 0 00:26:41.052 19:47:27 -- target/dif.sh@28 -- # local sub 00:26:41.052 19:47:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:41.052 19:47:27 -- target/dif.sh@31 -- # create_subsystem 0 00:26:41.052 19:47:27 -- target/dif.sh@18 -- # local sub_id=0 00:26:41.052 19:47:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:41.052 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.053 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.053 bdev_null0 00:26:41.053 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.053 19:47:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:41.053 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.053 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.311 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.311 19:47:27 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:41.311 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.311 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.311 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.311 19:47:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:41.311 19:47:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.311 19:47:27 -- common/autotest_common.sh@10 -- # set +x 00:26:41.311 [2024-12-15 19:47:27.963886] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.311 19:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.311 19:47:27 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:41.311 19:47:27 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:41.311 19:47:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:41.311 19:47:27 -- nvmf/common.sh@520 -- # config=() 00:26:41.311 19:47:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.311 19:47:27 -- nvmf/common.sh@520 -- # local subsystem config 00:26:41.311 19:47:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.311 19:47:27 -- target/dif.sh@82 -- # gen_fio_conf 00:26:41.311 19:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:41.311 19:47:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:41.311 19:47:27 -- target/dif.sh@54 -- # local file 00:26:41.311 19:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:41.311 { 00:26:41.311 "params": { 00:26:41.311 "name": "Nvme$subsystem", 00:26:41.311 "trtype": "$TEST_TRANSPORT", 00:26:41.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.311 "adrfam": "ipv4", 00:26:41.311 "trsvcid": "$NVMF_PORT", 00:26:41.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.311 "hdgst": ${hdgst:-false}, 00:26:41.311 "ddgst": ${ddgst:-false} 00:26:41.311 }, 00:26:41.311 "method": "bdev_nvme_attach_controller" 00:26:41.311 } 00:26:41.311 EOF 00:26:41.311 )") 00:26:41.311 19:47:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:41.311 19:47:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:41.311 19:47:27 -- target/dif.sh@56 -- # cat 00:26:41.311 19:47:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:41.311 19:47:27 -- common/autotest_common.sh@1330 -- # shift 00:26:41.311 19:47:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:41.311 19:47:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.311 19:47:27 -- nvmf/common.sh@542 -- # cat 00:26:41.311 19:47:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:41.311 19:47:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:41.311 19:47:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:41.311 19:47:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:41.311 19:47:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:41.311 19:47:27 -- nvmf/common.sh@544 -- # jq . 
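[editor's note] The target side of this fio_dif_digest run is the single null bdev with DIF type 3 created by the rpc_cmd calls traced just above; on the initiator side the only change from the earlier runs is that the generated attach parameters printed below set hdgst and ddgst to true, so the NVMe/TCP connection carries header and data digests. A condensed standalone equivalent of the target-side setup is sketched here; rpc_cmd is assumed to wrap scripts/rpc.py against the default RPC socket, and the script path is an assumption.

# Sketch of the target-side RPCs driven through rpc_cmd above (paths assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Assumes the TCP transport already exists, e.g.:
#   $RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information (DIF) type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it as namespace 1 of cnode0 over NVMe/TCP on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420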
00:26:41.311 19:47:27 -- nvmf/common.sh@545 -- # IFS=, 00:26:41.311 19:47:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:41.311 "params": { 00:26:41.311 "name": "Nvme0", 00:26:41.311 "trtype": "tcp", 00:26:41.311 "traddr": "10.0.0.2", 00:26:41.311 "adrfam": "ipv4", 00:26:41.311 "trsvcid": "4420", 00:26:41.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:41.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:41.311 "hdgst": true, 00:26:41.311 "ddgst": true 00:26:41.311 }, 00:26:41.311 "method": "bdev_nvme_attach_controller" 00:26:41.311 }' 00:26:41.311 19:47:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:41.311 19:47:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:41.311 19:47:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.311 19:47:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:41.311 19:47:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:41.311 19:47:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:41.311 19:47:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:41.311 19:47:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:41.311 19:47:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:41.311 19:47:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.311 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:41.311 ... 00:26:41.311 fio-3.35 00:26:41.311 Starting 3 threads 00:26:41.878 [2024-12-15 19:47:28.571045] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:41.878 [2024-12-15 19:47:28.571115] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:51.854 00:26:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=102765: Sun Dec 15 19:47:38 2024 00:26:51.854 read: IOPS=237, BW=29.7MiB/s (31.2MB/s)(299MiB/10046msec) 00:26:51.854 slat (nsec): min=6450, max=81193, avg=14081.10, stdev=7123.10 00:26:51.854 clat (usec): min=5724, max=50276, avg=12578.50, stdev=1906.05 00:26:51.854 lat (usec): min=5734, max=50289, avg=12592.58, stdev=1906.83 00:26:51.854 clat percentiles (usec): 00:26:51.854 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[11076], 20.00th=[11731], 00:26:51.854 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:26:51.854 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14746], 00:26:51.854 | 99.00th=[15664], 99.50th=[16057], 99.90th=[18220], 99.95th=[47449], 00:26:51.854 | 99.99th=[50070] 00:26:51.854 bw ( KiB/s): min=28928, max=32512, per=33.91%, avg=30477.47, stdev=1127.16, samples=19 00:26:51.854 iops : min= 226, max= 254, avg=238.11, stdev= 8.81, samples=19 00:26:51.854 lat (msec) : 10=6.66%, 20=93.26%, 50=0.04%, 100=0.04% 00:26:51.854 cpu : usr=95.09%, sys=3.52%, ctx=12, majf=0, minf=9 00:26:51.854 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=102766: Sun Dec 15 19:47:38 2024 00:26:51.854 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(258MiB/10005msec) 00:26:51.854 slat (usec): min=6, max=102, avg=14.25, stdev= 7.37 00:26:51.854 clat (usec): min=7452, max=18890, avg=14499.62, stdev=1611.08 00:26:51.854 lat (usec): min=7464, max=18903, avg=14513.87, stdev=1612.34 00:26:51.854 clat percentiles (usec): 00:26:51.854 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[13435], 20.00th=[13829], 00:26:51.854 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:26:51.854 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:26:51.854 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:26:51.854 | 99.99th=[19006] 00:26:51.854 bw ( KiB/s): min=24576, max=29184, per=29.54%, avg=26545.89, stdev=1181.58, samples=19 00:26:51.854 iops : min= 192, max= 228, avg=207.37, stdev= 9.24, samples=19 00:26:51.854 lat (msec) : 10=4.89%, 20=95.11% 00:26:51.854 cpu : usr=94.87%, sys=3.84%, ctx=10, majf=0, minf=11 00:26:51.854 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=102767: Sun Dec 15 19:47:38 2024 00:26:51.854 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10008msec) 00:26:51.854 slat (nsec): min=6588, max=78058, avg=17205.99, stdev=7714.41 00:26:51.854 clat (usec): min=7496, max=54722, avg=11536.15, stdev=4447.55 00:26:51.854 lat (usec): min=7525, max=54731, avg=11553.36, stdev=4447.69 00:26:51.854 clat percentiles (usec): 00:26:51.854 | 1.00th=[ 
8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:26:51.854 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:26:51.854 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:26:51.854 | 99.00th=[50594], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:26:51.854 | 99.99th=[54789] 00:26:51.854 bw ( KiB/s): min=25344, max=35840, per=36.97%, avg=33226.11, stdev=2806.39, samples=19 00:26:51.854 iops : min= 198, max= 280, avg=259.58, stdev=21.92, samples=19 00:26:51.854 lat (msec) : 10=11.36%, 20=87.49%, 50=0.04%, 100=1.12% 00:26:51.854 cpu : usr=93.64%, sys=4.56%, ctx=27, majf=0, minf=9 00:26:51.854 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.854 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.854 00:26:51.854 Run status group 0 (all jobs): 00:26:51.854 READ: bw=87.8MiB/s (92.0MB/s), 25.8MiB/s-32.4MiB/s (27.1MB/s-34.0MB/s), io=882MiB (924MB), run=10005-10046msec 00:26:52.113 19:47:39 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:52.113 19:47:39 -- target/dif.sh@43 -- # local sub 00:26:52.113 19:47:39 -- target/dif.sh@45 -- # for sub in "$@" 00:26:52.113 19:47:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:52.113 19:47:39 -- target/dif.sh@36 -- # local sub_id=0 00:26:52.113 19:47:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:52.113 19:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.113 19:47:39 -- common/autotest_common.sh@10 -- # set +x 00:26:52.372 19:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.372 19:47:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:52.372 19:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.372 19:47:39 -- common/autotest_common.sh@10 -- # set +x 00:26:52.372 19:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.372 00:26:52.372 real 0m11.090s 00:26:52.372 user 0m29.109s 00:26:52.372 sys 0m1.506s 00:26:52.372 19:47:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:52.372 19:47:39 -- common/autotest_common.sh@10 -- # set +x 00:26:52.372 ************************************ 00:26:52.372 END TEST fio_dif_digest 00:26:52.372 ************************************ 00:26:52.372 19:47:39 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:52.372 19:47:39 -- target/dif.sh@147 -- # nvmftestfini 00:26:52.372 19:47:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:52.372 19:47:39 -- nvmf/common.sh@116 -- # sync 00:26:52.372 19:47:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:52.372 19:47:39 -- nvmf/common.sh@119 -- # set +e 00:26:52.372 19:47:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:52.372 19:47:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:52.372 rmmod nvme_tcp 00:26:52.372 rmmod nvme_fabrics 00:26:52.372 rmmod nvme_keyring 00:26:52.372 19:47:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:52.372 19:47:39 -- nvmf/common.sh@123 -- # set -e 00:26:52.372 19:47:39 -- nvmf/common.sh@124 -- # return 0 00:26:52.372 19:47:39 -- nvmf/common.sh@477 -- # '[' -n 101997 ']' 00:26:52.372 19:47:39 -- nvmf/common.sh@478 -- # killprocess 101997 00:26:52.372 19:47:39 -- common/autotest_common.sh@936 -- # '[' -z 101997 ']' 
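[editor's note] The trace lines that follow are the standard teardown: killprocess stops the nvmf_tgt application by pid and nvmftestfini unloads the kernel NVMe/TCP initiator modules (the rmmod lines above). Condensed into a standalone sketch; the pid is the one from this run, and the wait call assumes nvmf_tgt was started from the same shell, as it is in the harness.

# Sketch of the teardown performed by the next trace entries.
pid=101997
sync                                     # flush outstanding I/O first
if kill -0 "$pid" 2>/dev/null; then      # is the nvmf target still running?
    kill "$pid"                          # SIGTERM; the SPDK reactors shut down cleanly
    wait "$pid" 2>/dev/null || true      # only meaningful if nvmf_tgt is a child of this shell
fi
modprobe -v -r nvme-tcp                  # removes nvme_tcp ...
modprobe -v -r nvme-fabrics              # ... then nvme_fabrics / nvme_keyring, as logged above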
00:26:52.372 19:47:39 -- common/autotest_common.sh@940 -- # kill -0 101997 00:26:52.372 19:47:39 -- common/autotest_common.sh@941 -- # uname 00:26:52.372 19:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:52.372 19:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101997 00:26:52.372 killing process with pid 101997 00:26:52.372 19:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:52.372 19:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:52.372 19:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101997' 00:26:52.372 19:47:39 -- common/autotest_common.sh@955 -- # kill 101997 00:26:52.372 19:47:39 -- common/autotest_common.sh@960 -- # wait 101997 00:26:52.631 19:47:39 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:52.631 19:47:39 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:53.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:53.199 Waiting for block devices as requested 00:26:53.199 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:53.199 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:53.199 19:47:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:53.199 19:47:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:53.199 19:47:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.199 19:47:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:53.199 19:47:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.199 19:47:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.199 19:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.199 19:47:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:53.199 00:26:53.199 real 1m0.610s 00:26:53.199 user 3m54.529s 00:26:53.199 sys 0m13.054s 00:26:53.199 ************************************ 00:26:53.199 END TEST nvmf_dif 00:26:53.199 ************************************ 00:26:53.199 19:47:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:53.199 19:47:40 -- common/autotest_common.sh@10 -- # set +x 00:26:53.458 19:47:40 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:53.458 19:47:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:53.458 19:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.458 19:47:40 -- common/autotest_common.sh@10 -- # set +x 00:26:53.458 ************************************ 00:26:53.458 START TEST nvmf_abort_qd_sizes 00:26:53.458 ************************************ 00:26:53.458 19:47:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:53.458 * Looking for test storage... 
00:26:53.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:53.458 19:47:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:53.458 19:47:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:53.458 19:47:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:53.458 19:47:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:53.458 19:47:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:53.458 19:47:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:53.458 19:47:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:53.458 19:47:40 -- scripts/common.sh@335 -- # IFS=.-: 00:26:53.458 19:47:40 -- scripts/common.sh@335 -- # read -ra ver1 00:26:53.458 19:47:40 -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.458 19:47:40 -- scripts/common.sh@336 -- # read -ra ver2 00:26:53.458 19:47:40 -- scripts/common.sh@337 -- # local 'op=<' 00:26:53.458 19:47:40 -- scripts/common.sh@339 -- # ver1_l=2 00:26:53.458 19:47:40 -- scripts/common.sh@340 -- # ver2_l=1 00:26:53.458 19:47:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:53.458 19:47:40 -- scripts/common.sh@343 -- # case "$op" in 00:26:53.458 19:47:40 -- scripts/common.sh@344 -- # : 1 00:26:53.458 19:47:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:53.458 19:47:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:53.458 19:47:40 -- scripts/common.sh@364 -- # decimal 1 00:26:53.458 19:47:40 -- scripts/common.sh@352 -- # local d=1 00:26:53.458 19:47:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.458 19:47:40 -- scripts/common.sh@354 -- # echo 1 00:26:53.458 19:47:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:53.458 19:47:40 -- scripts/common.sh@365 -- # decimal 2 00:26:53.458 19:47:40 -- scripts/common.sh@352 -- # local d=2 00:26:53.458 19:47:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.458 19:47:40 -- scripts/common.sh@354 -- # echo 2 00:26:53.458 19:47:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:53.458 19:47:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:53.458 19:47:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:53.458 19:47:40 -- scripts/common.sh@367 -- # return 0 00:26:53.458 19:47:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.458 19:47:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:53.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.458 --rc genhtml_branch_coverage=1 00:26:53.458 --rc genhtml_function_coverage=1 00:26:53.458 --rc genhtml_legend=1 00:26:53.458 --rc geninfo_all_blocks=1 00:26:53.458 --rc geninfo_unexecuted_blocks=1 00:26:53.458 00:26:53.458 ' 00:26:53.458 19:47:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:53.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.458 --rc genhtml_branch_coverage=1 00:26:53.458 --rc genhtml_function_coverage=1 00:26:53.458 --rc genhtml_legend=1 00:26:53.458 --rc geninfo_all_blocks=1 00:26:53.458 --rc geninfo_unexecuted_blocks=1 00:26:53.458 00:26:53.458 ' 00:26:53.458 19:47:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:53.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.458 --rc genhtml_branch_coverage=1 00:26:53.458 --rc genhtml_function_coverage=1 00:26:53.458 --rc genhtml_legend=1 00:26:53.458 --rc geninfo_all_blocks=1 00:26:53.458 --rc geninfo_unexecuted_blocks=1 00:26:53.458 00:26:53.458 ' 00:26:53.458 
19:47:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:53.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.458 --rc genhtml_branch_coverage=1 00:26:53.458 --rc genhtml_function_coverage=1 00:26:53.458 --rc genhtml_legend=1 00:26:53.458 --rc geninfo_all_blocks=1 00:26:53.458 --rc geninfo_unexecuted_blocks=1 00:26:53.458 00:26:53.458 ' 00:26:53.458 19:47:40 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:53.458 19:47:40 -- nvmf/common.sh@7 -- # uname -s 00:26:53.458 19:47:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.458 19:47:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.458 19:47:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.458 19:47:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.458 19:47:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.458 19:47:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.458 19:47:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.459 19:47:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.459 19:47:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.459 19:47:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.459 19:47:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:26:53.459 19:47:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 00:26:53.459 19:47:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.459 19:47:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.459 19:47:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:53.459 19:47:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:53.459 19:47:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.459 19:47:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.459 19:47:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.459 19:47:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.459 19:47:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.459 19:47:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.459 19:47:40 -- paths/export.sh@5 -- # export PATH 00:26:53.459 19:47:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.459 19:47:40 -- nvmf/common.sh@46 -- # : 0 00:26:53.459 19:47:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:53.459 19:47:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:53.459 19:47:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:53.459 19:47:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.459 19:47:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.459 19:47:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:53.459 19:47:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:53.459 19:47:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:53.459 19:47:40 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:53.459 19:47:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:53.459 19:47:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.459 19:47:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:53.459 19:47:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:53.459 19:47:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:53.459 19:47:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.459 19:47:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.459 19:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.459 19:47:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:53.459 19:47:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:53.459 19:47:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:53.459 19:47:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:53.459 19:47:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:53.459 19:47:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:53.459 19:47:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.459 19:47:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.459 19:47:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:53.459 19:47:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:53.459 19:47:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:53.459 19:47:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:53.459 19:47:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:53.459 19:47:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.459 19:47:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:53.459 19:47:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:53.459 19:47:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:53.459 19:47:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:53.459 19:47:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:53.459 19:47:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:53.718 Cannot find device "nvmf_tgt_br" 00:26:53.718 19:47:40 -- nvmf/common.sh@154 -- # true 00:26:53.718 19:47:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:53.718 Cannot find device "nvmf_tgt_br2" 00:26:53.718 19:47:40 -- nvmf/common.sh@155 -- # true 
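[editor's note] The nvmf_veth_init steps that follow build a small virtual topology: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 in the root namespace, and a bridge joins the veth peer ends. The sketch below condenses the traced commands (the individual "ip link set ... up" calls are folded into a comment, and the ordering is simplified).

# Condensed sketch of the topology nvmf_veth_init creates in the trace below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# each interface (in both namespaces) is then brought up, and NVMe/TCP traffic allowed:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT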
00:26:53.718 19:47:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:53.718 19:47:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:53.718 Cannot find device "nvmf_tgt_br" 00:26:53.718 19:47:40 -- nvmf/common.sh@157 -- # true 00:26:53.718 19:47:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:53.718 Cannot find device "nvmf_tgt_br2" 00:26:53.718 19:47:40 -- nvmf/common.sh@158 -- # true 00:26:53.718 19:47:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:53.718 19:47:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:53.718 19:47:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:53.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:53.718 19:47:40 -- nvmf/common.sh@161 -- # true 00:26:53.718 19:47:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:53.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:53.718 19:47:40 -- nvmf/common.sh@162 -- # true 00:26:53.718 19:47:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:53.718 19:47:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:53.718 19:47:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:53.718 19:47:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:53.718 19:47:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:53.718 19:47:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:53.718 19:47:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:53.718 19:47:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:53.718 19:47:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:53.718 19:47:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:53.718 19:47:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:53.718 19:47:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:53.718 19:47:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:53.718 19:47:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:53.718 19:47:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:53.718 19:47:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:53.718 19:47:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:53.718 19:47:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:53.718 19:47:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:53.718 19:47:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:53.718 19:47:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:53.718 19:47:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:53.977 19:47:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:53.977 19:47:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:53.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:53.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:26:53.977 00:26:53.977 --- 10.0.0.2 ping statistics --- 00:26:53.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.977 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:53.977 19:47:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:53.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:53.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:26:53.977 00:26:53.977 --- 10.0.0.3 ping statistics --- 00:26:53.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.977 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:53.977 19:47:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:53.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:53.977 00:26:53.977 --- 10.0.0.1 ping statistics --- 00:26:53.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.977 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:53.977 19:47:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.977 19:47:40 -- nvmf/common.sh@421 -- # return 0 00:26:53.977 19:47:40 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:53.977 19:47:40 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:54.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:54.545 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:54.804 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:54.804 19:47:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.804 19:47:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:54.804 19:47:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:54.804 19:47:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.804 19:47:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:54.804 19:47:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:54.804 19:47:41 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:54.804 19:47:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:54.804 19:47:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:54.804 19:47:41 -- common/autotest_common.sh@10 -- # set +x 00:26:54.804 19:47:41 -- nvmf/common.sh@469 -- # nvmfpid=103366 00:26:54.804 19:47:41 -- nvmf/common.sh@470 -- # waitforlisten 103366 00:26:54.804 19:47:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:54.804 19:47:41 -- common/autotest_common.sh@829 -- # '[' -z 103366 ']' 00:26:54.804 19:47:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.804 19:47:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.804 19:47:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.804 19:47:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.804 19:47:41 -- common/autotest_common.sh@10 -- # set +x 00:26:54.804 [2024-12-15 19:47:41.594023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:54.804 [2024-12-15 19:47:41.594110] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.062 [2024-12-15 19:47:41.734748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.062 [2024-12-15 19:47:41.829293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:55.062 [2024-12-15 19:47:41.829467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.063 [2024-12-15 19:47:41.829484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.063 [2024-12-15 19:47:41.829495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.063 [2024-12-15 19:47:41.829678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.063 [2024-12-15 19:47:41.829849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.063 [2024-12-15 19:47:41.830563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.063 [2024-12-15 19:47:41.830594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.999 19:47:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.999 19:47:42 -- common/autotest_common.sh@862 -- # return 0 00:26:55.999 19:47:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:55.999 19:47:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.999 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:55.999 19:47:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.999 19:47:42 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:55.999 19:47:42 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:55.999 19:47:42 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:55.999 19:47:42 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:55.999 19:47:42 -- scripts/common.sh@312 -- # local nvmes 00:26:55.999 19:47:42 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:55.999 19:47:42 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:55.999 19:47:42 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:56.000 19:47:42 -- scripts/common.sh@297 -- # local bdf= 00:26:56.000 19:47:42 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:56.000 19:47:42 -- scripts/common.sh@232 -- # local class 00:26:56.000 19:47:42 -- scripts/common.sh@233 -- # local subclass 00:26:56.000 19:47:42 -- scripts/common.sh@234 -- # local progif 00:26:56.000 19:47:42 -- scripts/common.sh@235 -- # printf %02x 1 00:26:56.000 19:47:42 -- scripts/common.sh@235 -- # class=01 00:26:56.000 19:47:42 -- scripts/common.sh@236 -- # printf %02x 8 00:26:56.000 19:47:42 -- scripts/common.sh@236 -- # subclass=08 00:26:56.000 19:47:42 -- scripts/common.sh@237 -- # printf %02x 2 00:26:56.000 19:47:42 -- scripts/common.sh@237 -- # progif=02 00:26:56.000 19:47:42 -- scripts/common.sh@239 -- # hash lspci 00:26:56.000 19:47:42 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:56.000 19:47:42 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:56.000 19:47:42 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:56.000 19:47:42 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:56.000 19:47:42 -- scripts/common.sh@244 -- # tr -d '"' 00:26:56.000 19:47:42 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:56.000 19:47:42 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:56.000 19:47:42 -- scripts/common.sh@15 -- # local i 00:26:56.000 19:47:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:56.000 19:47:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:56.000 19:47:42 -- scripts/common.sh@24 -- # return 0 00:26:56.000 19:47:42 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:56.000 19:47:42 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:56.000 19:47:42 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:56.000 19:47:42 -- scripts/common.sh@15 -- # local i 00:26:56.000 19:47:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:56.000 19:47:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:56.000 19:47:42 -- scripts/common.sh@24 -- # return 0 00:26:56.000 19:47:42 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:56.000 19:47:42 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:56.000 19:47:42 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:56.000 19:47:42 -- scripts/common.sh@322 -- # uname -s 00:26:56.000 19:47:42 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:56.000 19:47:42 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:56.000 19:47:42 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:56.000 19:47:42 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:56.000 19:47:42 -- scripts/common.sh@322 -- # uname -s 00:26:56.000 19:47:42 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:56.000 19:47:42 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:56.000 19:47:42 -- scripts/common.sh@327 -- # (( 2 )) 00:26:56.000 19:47:42 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:56.000 19:47:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.000 19:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 ************************************ 00:26:56.000 START TEST spdk_target_abort 00:26:56.000 ************************************ 00:26:56.000 19:47:42 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:56.000 19:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 spdk_targetn1 00:26:56.000 19:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.000 19:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 [2024-12-15 
19:47:42.817357] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.000 19:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:56.000 19:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 19:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:56.000 19:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 19:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:56.000 19:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.000 19:47:42 -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 [2024-12-15 19:47:42.849724] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.000 19:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.000 19:47:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:59.291 Initializing NVMe Controllers 00:26:59.291 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:59.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:59.291 Initialization complete. Launching workers. 00:26:59.291 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10822, failed: 0 00:26:59.291 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1137, failed to submit 9685 00:26:59.291 success 742, unsuccess 395, failed 0 00:26:59.291 19:47:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.291 19:47:46 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:02.575 Initializing NVMe Controllers 00:27:02.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:02.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:02.575 Initialization complete. Launching workers. 00:27:02.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5991, failed: 0 00:27:02.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1223, failed to submit 4768 00:27:02.575 success 262, unsuccess 961, failed 0 00:27:02.575 19:47:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.575 19:47:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:05.875 Initializing NVMe Controllers 00:27:05.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:05.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:05.875 Initialization complete. Launching workers. 
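For reference, the spdk_target_abort flow traced above boils down to building an NVMe-oF/TCP target from the local NVMe drive over JSON-RPC and then driving it with the abort example at queue depths 4, 24 and 64. A minimal sketch of that sequence, assuming scripts/rpc.py is used to issue the same RPCs the test sends through its rpc_cmd wrapper; the PCI address, NQN and listen address are the values from this run (the qd=64 run continues in the trace below):

  # export the local NVMe drive (0000:00:06.0) as an NVMe-oF/TCP subsystem
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

  # then exercise abort handling at each queue depth, mirroring the qds=(4 24 64) loop
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
  done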
00:27:05.875 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31019, failed: 0 00:27:05.875 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2652, failed to submit 28367 00:27:05.875 success 481, unsuccess 2171, failed 0 00:27:05.875 19:47:52 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:05.875 19:47:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.875 19:47:52 -- common/autotest_common.sh@10 -- # set +x 00:27:05.875 19:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.875 19:47:52 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:05.875 19:47:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.875 19:47:52 -- common/autotest_common.sh@10 -- # set +x 00:27:06.134 19:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.134 19:47:52 -- target/abort_qd_sizes.sh@62 -- # killprocess 103366 00:27:06.134 19:47:52 -- common/autotest_common.sh@936 -- # '[' -z 103366 ']' 00:27:06.134 19:47:52 -- common/autotest_common.sh@940 -- # kill -0 103366 00:27:06.134 19:47:52 -- common/autotest_common.sh@941 -- # uname 00:27:06.134 19:47:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:06.134 19:47:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103366 00:27:06.134 19:47:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:06.134 killing process with pid 103366 00:27:06.134 19:47:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:06.134 19:47:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103366' 00:27:06.134 19:47:53 -- common/autotest_common.sh@955 -- # kill 103366 00:27:06.134 19:47:53 -- common/autotest_common.sh@960 -- # wait 103366 00:27:06.702 00:27:06.702 real 0m10.598s 00:27:06.702 user 0m43.453s 00:27:06.702 sys 0m1.702s 00:27:06.702 19:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.702 19:47:53 -- common/autotest_common.sh@10 -- # set +x 00:27:06.702 ************************************ 00:27:06.702 END TEST spdk_target_abort 00:27:06.702 ************************************ 00:27:06.702 19:47:53 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:06.702 19:47:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.702 19:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.702 19:47:53 -- common/autotest_common.sh@10 -- # set +x 00:27:06.702 ************************************ 00:27:06.702 START TEST kernel_target_abort 00:27:06.702 ************************************ 00:27:06.702 19:47:53 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:06.702 19:47:53 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:06.702 19:47:53 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:06.702 19:47:53 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:06.702 19:47:53 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:06.702 19:47:53 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:06.702 19:47:53 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:06.702 19:47:53 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:06.702 19:47:53 -- nvmf/common.sh@627 -- # local block nvme 00:27:06.702 19:47:53 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:06.703 19:47:53 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:06.703 19:47:53 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:06.703 19:47:53 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:06.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.962 Waiting for block devices as requested 00:27:06.962 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:07.220 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:07.220 19:47:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:07.220 19:47:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.220 19:47:53 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:07.220 19:47:53 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:07.220 19:47:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.220 No valid GPT data, bailing 00:27:07.220 19:47:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.220 19:47:54 -- scripts/common.sh@393 -- # pt= 00:27:07.220 19:47:54 -- scripts/common.sh@394 -- # return 1 00:27:07.220 19:47:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:07.220 19:47:54 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:07.220 19:47:54 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:07.220 19:47:54 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:07.220 19:47:54 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:07.220 19:47:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:07.480 No valid GPT data, bailing 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # pt= 00:27:07.480 19:47:54 -- scripts/common.sh@394 -- # return 1 00:27:07.480 19:47:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:07.480 19:47:54 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:07.480 19:47:54 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:07.480 19:47:54 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:07.480 19:47:54 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:07.480 19:47:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:07.480 No valid GPT data, bailing 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # pt= 00:27:07.480 19:47:54 -- scripts/common.sh@394 -- # return 1 00:27:07.480 19:47:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:07.480 19:47:54 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:07.480 19:47:54 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:07.480 19:47:54 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:07.480 19:47:54 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:07.480 19:47:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:07.480 No valid GPT data, bailing 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:07.480 19:47:54 -- scripts/common.sh@393 -- # pt= 00:27:07.480 19:47:54 -- scripts/common.sh@394 -- # return 1 00:27:07.480 19:47:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:07.480 19:47:54 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:07.480 19:47:54 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:07.480 19:47:54 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:07.480 19:47:54 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.480 19:47:54 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:07.480 19:47:54 -- nvmf/common.sh@654 -- # echo 1 00:27:07.480 19:47:54 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:07.480 19:47:54 -- nvmf/common.sh@656 -- # echo 1 00:27:07.480 19:47:54 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:07.480 19:47:54 -- nvmf/common.sh@663 -- # echo tcp 00:27:07.480 19:47:54 -- nvmf/common.sh@664 -- # echo 4420 00:27:07.480 19:47:54 -- nvmf/common.sh@665 -- # echo ipv4 00:27:07.480 19:47:54 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.480 19:47:54 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 --hostid=09a7e6b1-704d-4311-bcab-2c5a8f9a03c1 -a 10.0.0.1 -t tcp -s 4420 00:27:07.480 00:27:07.480 Discovery Log Number of Records 2, Generation counter 2 00:27:07.480 =====Discovery Log Entry 0====== 00:27:07.480 trtype: tcp 00:27:07.480 adrfam: ipv4 00:27:07.480 subtype: current discovery subsystem 00:27:07.480 treq: not specified, sq flow control disable supported 00:27:07.480 portid: 1 00:27:07.480 trsvcid: 4420 00:27:07.480 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.480 traddr: 10.0.0.1 00:27:07.480 eflags: none 00:27:07.480 sectype: none 00:27:07.480 =====Discovery Log Entry 1====== 00:27:07.480 trtype: tcp 00:27:07.480 adrfam: ipv4 00:27:07.480 subtype: nvme subsystem 00:27:07.480 treq: not specified, sq flow control disable supported 00:27:07.480 portid: 1 00:27:07.480 trsvcid: 4420 00:27:07.480 subnqn: kernel_target 00:27:07.480 traddr: 10.0.0.1 00:27:07.480 eflags: none 00:27:07.480 sectype: none 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
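The kernel_target_abort test above builds the same kind of NVMe/TCP target from the in-kernel nvmet driver, using nothing but configfs and the last unclaimed namespace (/dev/nvme1n3 in this run). A minimal sketch of those steps; the echoed values and directory paths come from the trace, while the configfs attribute names (attr_serial, attr_allow_any_host, device_path, addr_traddr, ...) are the standard nvmet ones and are assumed here, since the redirection targets themselves are not visible in the log:

  modprobe nvmet   # nvmet_tcp additionally backs the tcp port and is unloaded with it at cleanup
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-kernel_target > "$sub/attr_serial"
  echo 1                  > "$sub/attr_allow_any_host"
  echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"
  echo 1                  > "$sub/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

  # sanity-check the listener; the trace additionally passes --hostnqn/--hostid
  nvme discover -t tcp -a 10.0.0.1 -s 4420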
00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:07.480 19:47:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:07.739 19:47:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:07.739 19:47:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:11.025 Initializing NVMe Controllers 00:27:11.025 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:11.025 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:11.025 Initialization complete. Launching workers. 00:27:11.025 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31160, failed: 0 00:27:11.025 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31160, failed to submit 0 00:27:11.025 success 0, unsuccess 31160, failed 0 00:27:11.025 19:47:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.025 19:47:57 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:14.314 Initializing NVMe Controllers 00:27:14.314 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:14.314 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:14.314 Initialization complete. Launching workers. 00:27:14.314 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65820, failed: 0 00:27:14.314 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27347, failed to submit 38473 00:27:14.314 success 0, unsuccess 27347, failed 0 00:27:14.314 19:48:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:14.314 19:48:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:17.600 Initializing NVMe Controllers 00:27:17.600 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:17.600 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:17.600 Initialization complete. Launching workers. 
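Each run of the abort example ends with a one-line tally of the abort commands it submitted (success / unsuccess / failed). To aggregate those tallies across every run captured in a saved console log, a throwaway one-liner along these lines works; abort_runs.log is a hypothetical file holding the captured output, not something the test itself writes:

  # sum the per-run "success N, unsuccess N, failed N" lines printed by build/examples/abort
  awk '/success [0-9]+, unsuccess /{ s += $(NF-4); u += $(NF-2); f += $NF }
       END { printf "abort totals: success=%d unsuccess=%d failed=%d\n", s, u, f }' abort_runs.log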
00:27:17.600 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 73808, failed: 0 00:27:17.600 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18470, failed to submit 55338 00:27:17.600 success 0, unsuccess 18470, failed 0 00:27:17.600 19:48:03 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:17.600 19:48:03 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:17.600 19:48:03 -- nvmf/common.sh@677 -- # echo 0 00:27:17.600 19:48:03 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:17.600 19:48:03 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:17.600 19:48:03 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:17.600 19:48:03 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:17.600 19:48:03 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:17.600 19:48:03 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:17.600 00:27:17.600 real 0m10.578s 00:27:17.600 user 0m5.399s 00:27:17.600 sys 0m2.520s 00:27:17.600 19:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:17.600 19:48:03 -- common/autotest_common.sh@10 -- # set +x 00:27:17.600 ************************************ 00:27:17.600 END TEST kernel_target_abort 00:27:17.600 ************************************ 00:27:17.600 19:48:04 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:17.600 19:48:04 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:17.600 19:48:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:17.600 19:48:04 -- nvmf/common.sh@116 -- # sync 00:27:17.600 19:48:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:17.600 19:48:04 -- nvmf/common.sh@119 -- # set +e 00:27:17.600 19:48:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:17.600 19:48:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:17.600 rmmod nvme_tcp 00:27:17.600 rmmod nvme_fabrics 00:27:17.600 rmmod nvme_keyring 00:27:17.600 19:48:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:17.600 19:48:04 -- nvmf/common.sh@123 -- # set -e 00:27:17.600 19:48:04 -- nvmf/common.sh@124 -- # return 0 00:27:17.600 19:48:04 -- nvmf/common.sh@477 -- # '[' -n 103366 ']' 00:27:17.600 19:48:04 -- nvmf/common.sh@478 -- # killprocess 103366 00:27:17.600 19:48:04 -- common/autotest_common.sh@936 -- # '[' -z 103366 ']' 00:27:17.600 19:48:04 -- common/autotest_common.sh@940 -- # kill -0 103366 00:27:17.600 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103366) - No such process 00:27:17.600 Process with pid 103366 is not found 00:27:17.600 19:48:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103366 is not found' 00:27:17.600 19:48:04 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:17.600 19:48:04 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:18.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.168 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:18.168 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:18.168 19:48:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:18.168 19:48:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:18.168 19:48:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.168 19:48:04 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:18.168 19:48:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.168 19:48:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:18.168 19:48:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.168 19:48:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:18.168 00:27:18.168 real 0m24.800s 00:27:18.168 user 0m50.338s 00:27:18.168 sys 0m5.597s 00:27:18.168 19:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:18.168 ************************************ 00:27:18.168 END TEST nvmf_abort_qd_sizes 00:27:18.168 19:48:04 -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 ************************************ 00:27:18.168 19:48:04 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:18.168 19:48:04 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:18.168 19:48:04 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:18.168 19:48:04 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:18.168 19:48:04 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:18.168 19:48:04 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:18.168 19:48:04 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:18.168 19:48:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.168 19:48:04 -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 19:48:04 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:18.168 19:48:04 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:18.168 19:48:04 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:18.168 19:48:04 -- common/autotest_common.sh@10 -- # set +x 00:27:20.071 INFO: APP EXITING 00:27:20.071 INFO: killing all VMs 00:27:20.071 INFO: killing vhost app 00:27:20.071 INFO: EXIT DONE 00:27:20.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:20.638 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:20.638 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:21.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:21.576 Cleaning 00:27:21.576 Removing: /var/run/dpdk/spdk0/config 00:27:21.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:21.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:21.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:21.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:21.576 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:21.576 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:21.576 Removing: /var/run/dpdk/spdk1/config 00:27:21.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:21.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:21.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:21.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:21.576 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:21.576 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:21.576 Removing: /var/run/dpdk/spdk2/config 00:27:21.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:21.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:21.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:21.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:21.576 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:21.576 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:21.576 Removing: /var/run/dpdk/spdk3/config 00:27:21.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:21.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:21.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:21.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:21.576 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:21.576 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:21.576 Removing: /var/run/dpdk/spdk4/config 00:27:21.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:21.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:21.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:21.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:21.576 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:21.576 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:21.576 Removing: /dev/shm/nvmf_trace.0 00:27:21.576 Removing: /dev/shm/spdk_tgt_trace.pid67259 00:27:21.576 Removing: /var/run/dpdk/spdk0 00:27:21.576 Removing: /var/run/dpdk/spdk1 00:27:21.576 Removing: /var/run/dpdk/spdk2 00:27:21.576 Removing: /var/run/dpdk/spdk3 00:27:21.576 Removing: /var/run/dpdk/spdk4 00:27:21.576 Removing: /var/run/dpdk/spdk_pid100331 00:27:21.576 Removing: /var/run/dpdk/spdk_pid100536 00:27:21.576 Removing: /var/run/dpdk/spdk_pid100827 00:27:21.576 Removing: /var/run/dpdk/spdk_pid101132 00:27:21.576 Removing: /var/run/dpdk/spdk_pid101695 00:27:21.576 Removing: /var/run/dpdk/spdk_pid101700 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102072 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102232 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102390 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102487 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102642 00:27:21.576 Removing: /var/run/dpdk/spdk_pid102751 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103441 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103476 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103507 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103757 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103794 00:27:21.576 Removing: /var/run/dpdk/spdk_pid103824 00:27:21.576 Removing: /var/run/dpdk/spdk_pid67102 00:27:21.576 Removing: /var/run/dpdk/spdk_pid67259 00:27:21.576 Removing: /var/run/dpdk/spdk_pid67576 00:27:21.576 Removing: /var/run/dpdk/spdk_pid67845 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68028 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68118 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68217 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68319 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68363 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68393 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68456 00:27:21.576 Removing: /var/run/dpdk/spdk_pid68579 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69216 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69279 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69344 00:27:21.576 Removing: 
/var/run/dpdk/spdk_pid69373 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69452 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69480 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69559 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69587 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69644 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69674 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69721 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69754 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69928 00:27:21.576 Removing: /var/run/dpdk/spdk_pid69958 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70045 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70122 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70141 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70205 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70219 00:27:21.576 Removing: /var/run/dpdk/spdk_pid70259 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70273 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70313 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70327 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70364 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70381 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70416 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70437 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70472 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70491 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70526 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70546 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70581 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70599 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70635 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70654 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70689 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70703 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70743 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70757 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70800 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70814 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70854 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70868 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70908 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70922 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70962 00:27:21.835 Removing: /var/run/dpdk/spdk_pid70976 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71016 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71030 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71065 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71087 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71125 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71147 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71185 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71204 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71239 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71258 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71294 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71371 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71470 00:27:21.835 Removing: /var/run/dpdk/spdk_pid71915 00:27:21.835 Removing: /var/run/dpdk/spdk_pid78884 00:27:21.835 Removing: /var/run/dpdk/spdk_pid79231 00:27:21.835 Removing: /var/run/dpdk/spdk_pid81668 00:27:21.835 Removing: /var/run/dpdk/spdk_pid82061 00:27:21.835 Removing: /var/run/dpdk/spdk_pid82304 00:27:21.835 Removing: /var/run/dpdk/spdk_pid82349 00:27:21.835 Removing: /var/run/dpdk/spdk_pid82670 00:27:21.835 Removing: /var/run/dpdk/spdk_pid82720 00:27:21.835 Removing: /var/run/dpdk/spdk_pid83106 00:27:21.835 Removing: /var/run/dpdk/spdk_pid83639 00:27:21.835 Removing: /var/run/dpdk/spdk_pid84073 00:27:21.835 Removing: /var/run/dpdk/spdk_pid85049 
00:27:21.835 Removing: /var/run/dpdk/spdk_pid86049 00:27:21.835 Removing: /var/run/dpdk/spdk_pid86161 00:27:21.835 Removing: /var/run/dpdk/spdk_pid86230 00:27:21.835 Removing: /var/run/dpdk/spdk_pid87723 00:27:21.835 Removing: /var/run/dpdk/spdk_pid87965 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88427 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88540 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88686 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88732 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88777 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88823 00:27:21.835 Removing: /var/run/dpdk/spdk_pid88986 00:27:21.835 Removing: /var/run/dpdk/spdk_pid89140 00:27:21.835 Removing: /var/run/dpdk/spdk_pid89406 00:27:21.835 Removing: /var/run/dpdk/spdk_pid89529 00:27:21.835 Removing: /var/run/dpdk/spdk_pid89957 00:27:21.835 Removing: /var/run/dpdk/spdk_pid90345 00:27:21.835 Removing: /var/run/dpdk/spdk_pid90347 00:27:21.835 Removing: /var/run/dpdk/spdk_pid92598 00:27:21.835 Removing: /var/run/dpdk/spdk_pid92918 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93439 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93441 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93789 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93809 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93823 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93848 00:27:21.835 Removing: /var/run/dpdk/spdk_pid93865 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94005 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94011 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94115 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94123 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94231 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94233 00:27:21.835 Removing: /var/run/dpdk/spdk_pid94721 00:27:22.094 Removing: /var/run/dpdk/spdk_pid94764 00:27:22.094 Removing: /var/run/dpdk/spdk_pid94922 00:27:22.094 Removing: /var/run/dpdk/spdk_pid95042 00:27:22.094 Removing: /var/run/dpdk/spdk_pid95441 00:27:22.094 Removing: /var/run/dpdk/spdk_pid95696 00:27:22.094 Removing: /var/run/dpdk/spdk_pid96201 00:27:22.094 Removing: /var/run/dpdk/spdk_pid96770 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97225 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97320 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97406 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97478 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97614 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97704 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97799 00:27:22.094 Removing: /var/run/dpdk/spdk_pid97885 00:27:22.094 Removing: /var/run/dpdk/spdk_pid98253 00:27:22.094 Removing: /var/run/dpdk/spdk_pid98961 00:27:22.094 Clean 00:27:22.094 killing process with pid 61520 00:27:22.094 killing process with pid 61521 00:27:22.094 19:48:08 -- common/autotest_common.sh@1446 -- # return 0 00:27:22.094 19:48:08 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:22.094 19:48:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.094 19:48:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.094 19:48:08 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:22.094 19:48:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.094 19:48:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 19:48:08 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:22.352 19:48:08 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:22.352 19:48:08 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:22.352 19:48:09 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:22.352 19:48:09 -- spdk/autotest.sh@383 -- # hostname 00:27:22.352 19:48:09 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:22.352 geninfo: WARNING: invalid characters removed from testname! 00:27:44.355 19:48:28 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:44.923 19:48:31 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:47.486 19:48:33 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:49.390 19:48:36 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.923 19:48:38 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:54.456 19:48:40 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:56.359 19:48:43 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:56.359 19:48:43 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:56.359 19:48:43 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:56.359 19:48:43 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:56.359 19:48:43 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:56.359 19:48:43 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:56.359 19:48:43 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
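The tail of the job aggregates code coverage: a post-test lcov capture is merged with the pre-test baseline and then filtered down to SPDK's own sources, exactly as the lcov invocations above show. A condensed sketch of that sequence, with the long /home/vagrant/spdk_repo/spdk/../output/ paths shortened to bare file names for readability:

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

  # capture post-test counters (the baseline was captured the same way before the tests ran)
  lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info

  # merge baseline and post-test capture into one tracefile
  lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info

  # strip coverage for code that is not part of SPDK itself
  lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*'                                     -o cov_total.info
  lcov $LCOV_OPTS -q -r cov_total.info '/usr/*' --ignore-errors unused,unused          -o cov_total.info
  lcov $LCOV_OPTS -q -r cov_total.info '*/examples/vmd/*'                              -o cov_total.info
  lcov $LCOV_OPTS -q -r cov_total.info '*/app/spdk_lspci/*'                            -o cov_total.info
  lcov $LCOV_OPTS -q -r cov_total.info '*/app/spdk_top/*'                              -o cov_total.info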
00:27:56.359 19:48:43 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:56.359 19:48:43 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:56.359 19:48:43 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:56.359 19:48:43 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:56.359 19:48:43 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:56.359 19:48:43 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:56.359 19:48:43 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:56.359 19:48:43 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:56.359 19:48:43 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:56.359 19:48:43 -- scripts/common.sh@343 -- $ case "$op" in 00:27:56.359 19:48:43 -- scripts/common.sh@344 -- $ : 1 00:27:56.359 19:48:43 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:56.359 19:48:43 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.359 19:48:43 -- scripts/common.sh@364 -- $ decimal 1 00:27:56.359 19:48:43 -- scripts/common.sh@352 -- $ local d=1 00:27:56.359 19:48:43 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:56.359 19:48:43 -- scripts/common.sh@354 -- $ echo 1 00:27:56.359 19:48:43 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:56.359 19:48:43 -- scripts/common.sh@365 -- $ decimal 2 00:27:56.359 19:48:43 -- scripts/common.sh@352 -- $ local d=2 00:27:56.359 19:48:43 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:56.359 19:48:43 -- scripts/common.sh@354 -- $ echo 2 00:27:56.359 19:48:43 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:56.359 19:48:43 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:56.359 19:48:43 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:56.359 19:48:43 -- scripts/common.sh@367 -- $ return 0 00:27:56.359 19:48:43 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.359 19:48:43 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.359 --rc genhtml_branch_coverage=1 00:27:56.359 --rc genhtml_function_coverage=1 00:27:56.359 --rc genhtml_legend=1 00:27:56.359 --rc geninfo_all_blocks=1 00:27:56.359 --rc geninfo_unexecuted_blocks=1 00:27:56.359 00:27:56.359 ' 00:27:56.359 19:48:43 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.359 --rc genhtml_branch_coverage=1 00:27:56.359 --rc genhtml_function_coverage=1 00:27:56.359 --rc genhtml_legend=1 00:27:56.359 --rc geninfo_all_blocks=1 00:27:56.359 --rc geninfo_unexecuted_blocks=1 00:27:56.359 00:27:56.359 ' 00:27:56.359 19:48:43 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.359 --rc genhtml_branch_coverage=1 00:27:56.359 --rc genhtml_function_coverage=1 00:27:56.359 --rc genhtml_legend=1 00:27:56.359 --rc geninfo_all_blocks=1 00:27:56.359 --rc geninfo_unexecuted_blocks=1 00:27:56.359 00:27:56.359 ' 00:27:56.359 19:48:43 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.359 --rc genhtml_branch_coverage=1 00:27:56.359 --rc genhtml_function_coverage=1 00:27:56.359 --rc genhtml_legend=1 00:27:56.359 --rc geninfo_all_blocks=1 00:27:56.359 --rc geninfo_unexecuted_blocks=1 00:27:56.359 00:27:56.359 ' 00:27:56.359 19:48:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:56.359 19:48:43 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:56.359 19:48:43 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.359 19:48:43 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.359 19:48:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.360 19:48:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.360 19:48:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.360 19:48:43 -- paths/export.sh@5 -- $ export PATH 00:27:56.360 19:48:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.360 19:48:43 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:56.360 19:48:43 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:56.360 19:48:43 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734292123.XXXXXX 00:27:56.360 19:48:43 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734292123.lCemcK 00:27:56.360 19:48:43 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:56.360 19:48:43 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:27:56.360 19:48:43 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:56.360 19:48:43 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:56.360 19:48:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:56.360 19:48:43 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:56.360 19:48:43 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:56.360 19:48:43 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:56.360 19:48:43 -- common/autotest_common.sh@10 -- $ set +x 00:27:56.618 19:48:43 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:56.618 19:48:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:56.618 19:48:43 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:56.618 19:48:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:56.618 19:48:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:56.618 19:48:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:56.618 19:48:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:56.618 19:48:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:56.618 19:48:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:56.618 19:48:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:56.618 19:48:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:56.618 + [[ -n 5970 ]] 00:27:56.618 + sudo kill 5970 00:27:56.627 [Pipeline] } 00:27:56.643 [Pipeline] // timeout 00:27:56.648 [Pipeline] } 00:27:56.663 [Pipeline] // stage 00:27:56.668 [Pipeline] } 00:27:56.683 [Pipeline] // catchError 00:27:56.692 [Pipeline] stage 00:27:56.695 [Pipeline] { (Stop VM) 00:27:56.707 [Pipeline] sh 00:27:56.988 + vagrant halt 00:28:00.274 ==> default: Halting domain... 00:28:06.851 [Pipeline] sh 00:28:07.131 + vagrant destroy -f 00:28:10.417 ==> default: Removing domain... 00:28:10.429 [Pipeline] sh 00:28:10.709 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:10.718 [Pipeline] } 00:28:10.732 [Pipeline] // stage 00:28:10.737 [Pipeline] } 00:28:10.751 [Pipeline] // dir 00:28:10.756 [Pipeline] } 00:28:10.770 [Pipeline] // wrap 00:28:10.776 [Pipeline] } 00:28:10.788 [Pipeline] // catchError 00:28:10.797 [Pipeline] stage 00:28:10.799 [Pipeline] { (Epilogue) 00:28:10.812 [Pipeline] sh 00:28:11.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:16.375 [Pipeline] catchError 00:28:16.376 [Pipeline] { 00:28:16.389 [Pipeline] sh 00:28:16.670 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:16.942 Artifacts sizes are good 00:28:16.982 [Pipeline] } 00:28:16.996 [Pipeline] // catchError 00:28:17.007 [Pipeline] archiveArtifacts 00:28:17.014 Archiving artifacts 00:28:17.128 [Pipeline] cleanWs 00:28:17.139 [WS-CLEANUP] Deleting project workspace... 00:28:17.139 [WS-CLEANUP] Deferred wipeout is used... 00:28:17.146 [WS-CLEANUP] done 00:28:17.150 [Pipeline] } 00:28:17.165 [Pipeline] // stage 00:28:17.171 [Pipeline] } 00:28:17.184 [Pipeline] // node 00:28:17.189 [Pipeline] End of Pipeline 00:28:17.241 Finished: SUCCESS